Test Report: Hyper-V_Windows 20501

4595c49781c9e25c283632264448e235cf0fce36:2025-04-09:39062

Failed tests: 17/140

TestErrorSpam/setup (194.26s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-268300 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-268300 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 --driver=hyperv: (3m14.2551238s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-268300] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=20501
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-268300" primary control-plane node in "nospam-268300" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-268300" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (194.26s)
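Note: the two stderr lines flagged above come from minikube's in-VM registry connectivity probe. The SoftStart log below shows what that probe actually executes: curl.exe -sS -m 2 https://registry.k8s.io/ over SSH, which fails inside the Linux guest with "curl.exe: command not found", so the warning can fire even when networking is healthy. To re-check connectivity by hand, a minimal sketch (assuming the nospam-268300 profile is still running and that plain curl is available in the Buildroot guest image):

    # Re-run the probe inside the guest, using the curl binary the guest actually has
    out/minikube-windows-amd64.exe -p nospam-268300 ssh -- curl -sS -m 2 https://registry.k8s.io/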

TestFunctional/serial/SoftStart (342.73s)

=== RUN   TestFunctional/serial/SoftStart
I0408 23:08:09.127971    9864 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-618200 --alsologtostderr -v=8
E0408 23:08:10.440947    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0408 23:08:38.159855    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:676: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-618200 --alsologtostderr -v=8: exit status 90 (2m29.7984821s)

-- stdout --
	* [functional-618200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "functional-618200" primary control-plane node in "functional-618200" cluster
	* Updating the running hyperv "functional-618200" VM ...
	
	

-- /stdout --
** stderr ** 
	I0408 23:08:09.246712   12728 out.go:345] Setting OutFile to fd 812 ...
	I0408 23:08:09.325819   12728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:08:09.325819   12728 out.go:358] Setting ErrFile to fd 1352...
	I0408 23:08:09.325819   12728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:08:09.346759   12728 out.go:352] Setting JSON to false
	I0408 23:08:09.349936   12728 start.go:129] hostinfo: {"hostname":"minikube6","uptime":10687,"bootTime":1744143002,"procs":176,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0408 23:08:09.349936   12728 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 23:08:09.354680   12728 out.go:177] * [functional-618200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0408 23:08:09.360335   12728 notify.go:220] Checking for updates...
	I0408 23:08:09.363251   12728 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0408 23:08:09.365934   12728 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 23:08:09.370015   12728 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0408 23:08:09.372261   12728 out.go:177]   - MINIKUBE_LOCATION=20501
	I0408 23:08:09.376217   12728 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 23:08:09.380199   12728 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:08:09.380595   12728 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 23:08:14.781214   12728 out.go:177] * Using the hyperv driver based on existing profile
	I0408 23:08:14.787195   12728 start.go:297] selected driver: hyperv
	I0408 23:08:14.787195   12728 start.go:901] validating driver "hyperv" against &{Name:functional-618200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-618200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.37 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:08:14.788108   12728 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 23:08:14.840719   12728 cni.go:84] Creating CNI manager for ""
	I0408 23:08:14.840719   12728 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 23:08:14.840719   12728 start.go:340] cluster config:
	{Name:functional-618200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-618200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.37 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:08:14.840719   12728 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 23:08:14.844868   12728 out.go:177] * Starting "functional-618200" primary control-plane node in "functional-618200" cluster
	I0408 23:08:14.847279   12728 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0408 23:08:14.847279   12728 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0408 23:08:14.847279   12728 cache.go:56] Caching tarball of preloaded images
	I0408 23:08:14.847279   12728 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0408 23:08:14.847279   12728 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0408 23:08:14.848442   12728 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-618200\config.json ...
	I0408 23:08:14.850635   12728 start.go:360] acquireMachinesLock for functional-618200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 23:08:14.850635   12728 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-618200"
	I0408 23:08:14.851114   12728 start.go:96] Skipping create...Using existing machine configuration
	I0408 23:08:14.851183   12728 fix.go:54] fixHost starting: 
	I0408 23:08:14.851361   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:17.635558   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:17.636077   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:17.636077   12728 fix.go:112] recreateIfNeeded on functional-618200: state=Running err=<nil>
	W0408 23:08:17.636077   12728 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 23:08:17.641199   12728 out.go:177] * Updating the running hyperv "functional-618200" VM ...
	I0408 23:08:17.643270   12728 machine.go:93] provisionDockerMachine start ...
	I0408 23:08:17.643828   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:19.832353   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:19.832353   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:19.833486   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:22.348787   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:22.348787   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:22.354331   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:22.354942   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:22.354942   12728 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 23:08:22.482052   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-618200
	
	I0408 23:08:22.482109   12728 buildroot.go:166] provisioning hostname "functional-618200"
	I0408 23:08:22.482218   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:24.614743   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:24.615199   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:24.615199   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:27.116022   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:27.116669   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:27.122660   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:27.122837   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:27.122837   12728 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-618200 && echo "functional-618200" | sudo tee /etc/hostname
	I0408 23:08:27.296048   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-618200
	
	I0408 23:08:27.296048   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:29.515938   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:29.516732   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:29.516860   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:32.104430   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:32.104430   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:32.111087   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:32.111822   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:32.111822   12728 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-618200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-618200/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-618200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 23:08:32.239307   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 23:08:32.239307   12728 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0408 23:08:32.239307   12728 buildroot.go:174] setting up certificates
	I0408 23:08:32.239307   12728 provision.go:84] configureAuth start
	I0408 23:08:32.239907   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:34.375660   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:34.376637   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:34.376637   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:36.940152   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:36.940811   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:36.940910   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:39.102003   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:39.102003   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:39.102003   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:41.651752   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:41.651752   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:41.651752   12728 provision.go:143] copyHostCerts
	I0408 23:08:41.652744   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0408 23:08:41.653241   12728 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0408 23:08:41.653241   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0408 23:08:41.653897   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0408 23:08:41.655530   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0408 23:08:41.655919   12728 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0408 23:08:41.655919   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0408 23:08:41.656607   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0408 23:08:41.657919   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0408 23:08:41.658240   12728 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0408 23:08:41.658370   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0408 23:08:41.658791   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0408 23:08:41.659993   12728 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-618200 san=[127.0.0.1 192.168.113.37 functional-618200 localhost minikube]
	I0408 23:08:41.724180   12728 provision.go:177] copyRemoteCerts
	I0408 23:08:41.734528   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 23:08:41.734661   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:43.857555   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:43.858453   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:43.858453   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:46.376433   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:46.376433   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:46.376862   12728 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:08:46.479933   12728 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7452489s)
	I0408 23:08:46.479933   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0408 23:08:46.480251   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0408 23:08:46.526275   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0408 23:08:46.526275   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 23:08:46.571513   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0408 23:08:46.571513   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 23:08:46.618636   12728 provision.go:87] duration metric: took 14.3791442s to configureAuth
	I0408 23:08:46.618636   12728 buildroot.go:189] setting minikube options for container-runtime
	I0408 23:08:46.619360   12728 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:08:46.619360   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:48.759145   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:48.759997   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:48.760072   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:51.352431   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:51.352840   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:51.358422   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:51.359181   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:51.359181   12728 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0408 23:08:51.498239   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0408 23:08:51.498239   12728 buildroot.go:70] root file system type: tmpfs
	I0408 23:08:51.499500   12728 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0408 23:08:51.499565   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:53.639609   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:53.639609   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:53.639706   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:56.165286   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:56.165286   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:56.172269   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:56.172483   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:56.172483   12728 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0408 23:08:56.329047   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0408 23:08:56.329209   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:58.408221   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:58.408271   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:58.408271   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:00.972449   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:00.972449   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:00.978298   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:09:00.979066   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:09:00.979150   12728 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0408 23:09:01.120743   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 23:09:01.120743   12728 machine.go:96] duration metric: took 43.4763536s to provisionDockerMachine
	I0408 23:09:01.120743   12728 start.go:293] postStartSetup for "functional-618200" (driver="hyperv")
	I0408 23:09:01.120743   12728 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 23:09:01.134465   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 23:09:01.134586   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:03.239597   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:03.239597   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:03.240300   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:05.769173   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:05.769791   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:05.769977   12728 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:09:05.882717   12728 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7480703s)
	I0408 23:09:05.895357   12728 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 23:09:05.906701   12728 command_runner.go:130] > NAME=Buildroot
	I0408 23:09:05.906871   12728 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0408 23:09:05.906871   12728 command_runner.go:130] > ID=buildroot
	I0408 23:09:05.906871   12728 command_runner.go:130] > VERSION_ID=2023.02.9
	I0408 23:09:05.906871   12728 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0408 23:09:05.906871   12728 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 23:09:05.906871   12728 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0408 23:09:05.907746   12728 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0408 23:09:05.909230   12728 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
	I0408 23:09:05.909297   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /etc/ssl/certs/98642.pem
	I0408 23:09:05.909974   12728 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts -> hosts in /etc/test/nested/copy/9864
	I0408 23:09:05.909974   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts -> /etc/test/nested/copy/9864/hosts
	I0408 23:09:05.922022   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9864
	I0408 23:09:05.940207   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
	I0408 23:09:05.986656   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts --> /etc/test/nested/copy/9864/hosts (40 bytes)
	I0408 23:09:06.037448   12728 start.go:296] duration metric: took 4.9164478s for postStartSetup
	I0408 23:09:06.037545   12728 fix.go:56] duration metric: took 51.1857011s for fixHost
	I0408 23:09:06.037624   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:08.158094   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:08.158094   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:08.158094   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:10.681527   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:10.681527   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:10.688411   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:09:10.689102   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:09:10.689245   12728 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 23:09:10.829582   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744153750.860325411
	
	I0408 23:09:10.829582   12728 fix.go:216] guest clock: 1744153750.860325411
	I0408 23:09:10.829683   12728 fix.go:229] Guest: 2025-04-08 23:09:10.860325411 +0000 UTC Remote: 2025-04-08 23:09:06.0375451 +0000 UTC m=+56.890513901 (delta=4.822780311s)
	I0408 23:09:10.829858   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:12.957017   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:12.957017   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:12.957017   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:15.521412   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:15.521412   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:15.527916   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:09:15.528634   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:09:15.528634   12728 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744153750
	I0408 23:09:15.671072   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr  8 23:09:10 UTC 2025
	
	I0408 23:09:15.671072   12728 fix.go:236] clock set: Tue Apr  8 23:09:10 UTC 2025
	 (err=<nil>)
	I0408 23:09:15.671072   12728 start.go:83] releasing machines lock for "functional-618200", held for 1m0.8196519s
	I0408 23:09:15.671072   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:17.795924   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:17.795924   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:17.795924   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:20.343976   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:20.344152   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:20.347691   12728 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0408 23:09:20.347691   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:20.358515   12728 ssh_runner.go:195] Run: cat /version.json
	I0408 23:09:20.358515   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:22.544260   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:22.544260   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:22.544260   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:22.547450   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:22.547450   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:22.547565   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:25.306292   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:25.306292   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:25.306292   12728 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:09:25.329784   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:25.330858   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:25.330972   12728 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:09:25.407167   12728 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0408 23:09:25.407167   12728 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0594111s)
	W0408 23:09:25.407380   12728 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0408 23:09:25.427823   12728 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0408 23:09:25.427823   12728 ssh_runner.go:235] Completed: cat /version.json: (5.0692422s)
	I0408 23:09:25.441651   12728 ssh_runner.go:195] Run: systemctl --version
	I0408 23:09:25.452009   12728 command_runner.go:130] > systemd 252 (252)
	I0408 23:09:25.452009   12728 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0408 23:09:25.462226   12728 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0408 23:09:25.470182   12728 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0408 23:09:25.470647   12728 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 23:09:25.483329   12728 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 23:09:25.504611   12728 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0408 23:09:25.504611   12728 start.go:495] detecting cgroup driver to use...
	I0408 23:09:25.505055   12728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0408 23:09:25.518103   12728 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0408 23:09:25.518165   12728 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0408 23:09:25.545691   12728 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0408 23:09:25.557677   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0408 23:09:25.585837   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 23:09:25.605727   12728 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 23:09:25.616269   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 23:09:25.648654   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:09:25.682043   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 23:09:25.712502   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:09:25.745703   12728 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 23:09:25.776089   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 23:09:25.813738   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 23:09:25.847440   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0408 23:09:25.878964   12728 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 23:09:25.897917   12728 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0408 23:09:25.910039   12728 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 23:09:25.937635   12728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:09:26.191579   12728 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 23:09:26.223263   12728 start.go:495] detecting cgroup driver to use...
	I0408 23:09:26.235750   12728 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0408 23:09:26.260048   12728 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0408 23:09:26.260125   12728 command_runner.go:130] > [Unit]
	I0408 23:09:26.260125   12728 command_runner.go:130] > Description=Docker Application Container Engine
	I0408 23:09:26.260125   12728 command_runner.go:130] > Documentation=https://docs.docker.com
	I0408 23:09:26.260200   12728 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0408 23:09:26.260200   12728 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0408 23:09:26.260200   12728 command_runner.go:130] > StartLimitBurst=3
	I0408 23:09:26.260200   12728 command_runner.go:130] > StartLimitIntervalSec=60
	I0408 23:09:26.260200   12728 command_runner.go:130] > [Service]
	I0408 23:09:26.260200   12728 command_runner.go:130] > Type=notify
	I0408 23:09:26.260200   12728 command_runner.go:130] > Restart=on-failure
	I0408 23:09:26.260338   12728 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0408 23:09:26.260338   12728 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0408 23:09:26.260338   12728 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0408 23:09:26.260338   12728 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0408 23:09:26.260338   12728 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0408 23:09:26.260472   12728 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0408 23:09:26.260472   12728 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0408 23:09:26.260472   12728 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0408 23:09:26.260472   12728 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0408 23:09:26.260472   12728 command_runner.go:130] > ExecStart=
	I0408 23:09:26.260472   12728 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0408 23:09:26.260581   12728 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0408 23:09:26.260581   12728 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0408 23:09:26.260581   12728 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0408 23:09:26.260581   12728 command_runner.go:130] > LimitNOFILE=infinity
	I0408 23:09:26.260678   12728 command_runner.go:130] > LimitNPROC=infinity
	I0408 23:09:26.260707   12728 command_runner.go:130] > LimitCORE=infinity
	I0408 23:09:26.260707   12728 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0408 23:09:26.260707   12728 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0408 23:09:26.260764   12728 command_runner.go:130] > TasksMax=infinity
	I0408 23:09:26.260764   12728 command_runner.go:130] > TimeoutStartSec=0
	I0408 23:09:26.260764   12728 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0408 23:09:26.260764   12728 command_runner.go:130] > Delegate=yes
	I0408 23:09:26.260802   12728 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0408 23:09:26.260802   12728 command_runner.go:130] > KillMode=process
	I0408 23:09:26.260847   12728 command_runner.go:130] > [Install]
	I0408 23:09:26.260847   12728 command_runner.go:130] > WantedBy=multi-user.target
	I0408 23:09:26.272013   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:09:26.309047   12728 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 23:09:26.364238   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:09:26.397809   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 23:09:26.420470   12728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:09:26.452776   12728 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0408 23:09:26.465171   12728 ssh_runner.go:195] Run: which cri-dockerd
	I0408 23:09:26.471612   12728 command_runner.go:130] > /usr/bin/cri-dockerd
	I0408 23:09:26.483601   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0408 23:09:26.500243   12728 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0408 23:09:26.541951   12728 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0408 23:09:26.818543   12728 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0408 23:09:27.059393   12728 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0408 23:09:27.059393   12728 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0408 23:09:27.105693   12728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:09:27.332438   12728 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 23:10:38.780025   12728 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0408 23:10:38.780100   12728 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0408 23:10:38.783775   12728 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4502693s)
	I0408 23:10:38.797107   12728 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0408 23:10:38.826638   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	I0408 23:10:38.826758   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.094333857Z" level=info msg="Starting up"
	I0408 23:10:38.826758   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.095749501Z" level=info msg="containerd not running, starting managed containerd"
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.097506580Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.128963677Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152469766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152558876Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152717392Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152739794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827006   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152812201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827006   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152901110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827074   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153079328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827097   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153169038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827157   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153187739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827181   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153197940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827181   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153293950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827260   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153812303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827260   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156561482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827340   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156716198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156848512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156952822Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157044531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157169744Z" level=info msg="metadata content store policy set" policy=shared
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190389421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190521734Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190544737Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190560338Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190576740Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190838067Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191154799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191361820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191472031Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191493633Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191512135Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191527737Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191541238Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191555639Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191571341Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191603144Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827985   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191615846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.828081   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191628447Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.828188   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191749659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828234   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191774162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828234   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191800364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828234   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191815666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828308   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191830867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828308   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191844669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828356   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191857670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828356   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191870171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828426   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191882273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828426   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191897274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828489   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191908775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828524   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191920677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828613   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191932778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828649   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191947379Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0408 23:10:38.828649   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191967081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828790   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191979383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828855   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191992484Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0408 23:10:38.828855   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192114796Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0408 23:10:38.828935   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192196605Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192262611Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192291214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192304416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192318917Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192331918Z" level=info msg="NRI interface is disabled by configuration."
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193151202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193285015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193371424Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193820570Z" level=info msg="containerd successfully booted in 0.066941s"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.170474987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.203429127Z" level=info msg="Loading containers: start."
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.350665658Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.583414712Z" level=info msg="Loading containers: done."
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608611503Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608776419Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609056647Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609260067Z" level=info msg="Daemon has completed initialization"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.713909013Z" level=info msg="API listen on /var/run/docker.sock"
	I0408 23:10:38.829565   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.714066029Z" level=info msg="API listen on [::]:2376"
	I0408 23:10:38.829565   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 systemd[1]: Started Docker Application Container Engine.
	I0408 23:10:38.829625   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.811241096Z" level=info msg="Processing signal 'terminated'"
	I0408 23:10:38.829625   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813084503Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0408 23:10:38.829625   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813257403Z" level=info msg="Daemon shutdown complete"
	I0408 23:10:38.829625   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813288003Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0408 23:10:38.829753   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813374004Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0408 23:10:38.829753   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	I0408 23:10:38.829790   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	I0408 23:10:38.829942   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	I0408 23:10:38.829942   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	I0408 23:10:38.829942   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.861204748Z" level=info msg="Starting up"
	I0408 23:10:38.830042   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.863521556Z" level=info msg="containerd not running, starting managed containerd"
	I0408 23:10:38.830042   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.864856161Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1097
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.891008554Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913514335Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913559535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913591835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913605435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913626835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913637435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913748735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913963436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913985636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913996836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914019636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914159537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.916995847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917210048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917295148Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0408 23:10:38.830797   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917328148Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0408 23:10:38.830797   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917346448Z" level=info msg="metadata content store policy set" policy=shared
	I0408 23:10:38.830797   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917634649Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917741950Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917760750Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917900050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917914850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917957150Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918196151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918327452Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918413452Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918430852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918442352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918453152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918462452Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918473352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918484552Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918499152Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.831194   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918509952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.831194   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918520052Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.831194   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918543853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831194   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918558553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831300   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918568953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831300   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918579553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831300   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918589553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831377   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918609253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831377   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918626253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831422   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918638253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831442   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918657853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831442   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918673253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831442   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918682953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918692253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918702953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918715553Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918733953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918744753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918754653Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918959554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919161355Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919325455Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919361655Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919372055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919407356Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919416356Z" level=info msg="NRI interface is disabled by configuration."
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919735157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919968758Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920117658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920171758Z" level=info msg="containerd successfully booted in 0.029982s"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.908709690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.934950284Z" level=info msg="Loading containers: start."
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.062615440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.175164242Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.282062124Z" level=info msg="Loading containers: done."
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305666909Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305777709Z" level=info msg="Daemon has completed initialization"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.341856738Z" level=info msg="API listen on /var/run/docker.sock"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 systemd[1]: Started Docker Application Container Engine.
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.343491744Z" level=info msg="API listen on [::]:2376"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.905143108Z" level=info msg="Processing signal 'terminated'"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906371813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906906114Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.907033815Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906918515Z" level=info msg="Daemon shutdown complete"
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.955484761Z" level=info msg="Starting up"
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.957042767Z" level=info msg="containerd not running, starting managed containerd"
	I0408 23:10:38.832402   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.958462672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1462
	I0408 23:10:38.832402   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 dockerd[1462]: time="2025-04-08T23:07:33.983507761Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0408 23:10:38.832440   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009132353Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0408 23:10:38.832440   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009242353Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0408 23:10:38.832490   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009307753Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0408 23:10:38.832524   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009324953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832524   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009354454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832569   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009383954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009545254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009658655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009680555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009691855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009717555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832745   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.010024356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832794   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012580665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832826   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012671765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832878   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012945166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832917   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013039867Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0408 23:10:38.832917   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013070567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0408 23:10:38.832917   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013104967Z" level=info msg="metadata content store policy set" policy=shared
	I0408 23:10:38.832975   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013460968Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0408 23:10:38.832996   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013562869Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013583269Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013598369Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013611569Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013659269Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014010570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014156471Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014247371Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014266571Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014280071Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014397172Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014425272Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014441672Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014458272Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014472772Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014498972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014515572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014537972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014555672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014570972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833567   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014585972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833567   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014601072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014615672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014629372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014643572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014658573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014679173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014709673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014738473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014783273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014916873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014942274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014955574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014969174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015051774Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015092874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015107074Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015122374Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015133174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015147174Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015158874Z" level=info msg="NRI interface is disabled by configuration."
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015573476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015638476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015690176Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015715476Z" level=info msg="containerd successfully booted in 0.033079s"
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:35 functional-618200 dockerd[1456]: time="2025-04-08T23:07:35.262471031Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.762713164Z" level=info msg="Loading containers: start."
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.897446846Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.015338367Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.153824862Z" level=info msg="Loading containers: done."
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182692065Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182937366Z" level=info msg="Daemon has completed initialization"
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 systemd[1]: Started Docker Application Container Engine.
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.220981402Z" level=info msg="API listen on /var/run/docker.sock"
	I0408 23:10:38.834375   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.221045402Z" level=info msg="API listen on [::]:2376"
	I0408 23:10:38.834375   12728 command_runner.go:130] > Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928174323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834375   12728 command_runner.go:130] > Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928255628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834443   12728 command_runner.go:130] > Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928274329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834443   12728 command_runner.go:130] > Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928976471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834533   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011163114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011256119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011273420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011437330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.047888267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048098278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048281989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048657110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089143872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089470391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089714404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.090374541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331240402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331940241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332248459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332901095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587350115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587733437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587951349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.588255466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643351545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643476652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643513354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835183   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643620460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835183   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681369670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835183   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681570881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835294   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681658686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835294   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.682028307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835294   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094044455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835373   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094486867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835373   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.095561595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835373   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.097530446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835463   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394114311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835463   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394433319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835567   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394665025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835567   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.395349443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835567   12728 command_runner.go:130] > Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643182806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835567   12728 command_runner.go:130] > Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643370211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835723   12728 command_runner.go:130] > Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643392711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835723   12728 command_runner.go:130] > Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.645053352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835723   12728 command_runner.go:130] > Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216296816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835827   12728 command_runner.go:130] > Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216387017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835827   12728 command_runner.go:130] > Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216402117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835887   12728 command_runner.go:130] > Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216977424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540620784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540963288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541044989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541180590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.848480641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850292361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850566464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.851150170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.385762643Z" level=info msg="Processing signal 'terminated'"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574335274Z" level=info msg="shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574507675Z" level=warning msg="cleaning up after shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574520575Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.575374478Z" level=info msg="ignoring event" container=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.602965785Z" level=info msg="ignoring event" container=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.603895489Z" level=info msg="shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604175090Z" level=warning msg="cleaning up after shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604242890Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614380530Z" level=info msg="shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614605231Z" level=warning msg="cleaning up after shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614742231Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.620402053Z" level=info msg="ignoring event" container=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.620802455Z" level=info msg="shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621015255Z" level=info msg="ignoring event" container=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621947059Z" level=info msg="ignoring event" container=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.622304660Z" level=info msg="ignoring event" container=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622827062Z" level=warning msg="cleaning up after shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.623203064Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622314560Z" level=info msg="shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	I0408 23:10:38.836542   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624293868Z" level=warning msg="cleaning up after shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	I0408 23:10:38.836542   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624306868Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836542   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622381461Z" level=info msg="shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	I0408 23:10:38.836542   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631193795Z" level=warning msg="cleaning up after shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631249695Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.667400535Z" level=info msg="ignoring event" container=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.669623644Z" level=info msg="shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.672188454Z" level=warning msg="cleaning up after shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.672924657Z" level=info msg="ignoring event" container=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.673767960Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681394990Z" level=info msg="ignoring event" container=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681607190Z" level=info msg="ignoring event" container=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.681903492Z" level=info msg="shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685272405Z" level=warning msg="cleaning up after shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685407505Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.671723952Z" level=info msg="shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693693137Z" level=warning msg="cleaning up after shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693789338Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697563052Z" level=info msg="shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697641053Z" level=warning msg="cleaning up after shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697654453Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.725345060Z" level=info msg="ignoring event" container=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725697262Z" level=info msg="shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	I0408 23:10:38.837349   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725980963Z" level=warning msg="cleaning up after shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	I0408 23:10:38.837349   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.726206964Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.837349   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.734018694Z" level=info msg="ignoring event" container=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.837349   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.736798905Z" level=info msg="shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	I0408 23:10:38.837581   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737017505Z" level=warning msg="cleaning up after shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	I0408 23:10:38.837581   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737255906Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.837581   12728 command_runner.go:130] > Apr 08 23:09:32 functional-618200 dockerd[1456]: time="2025-04-08T23:09:32.552363388Z" level=info msg="ignoring event" container=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.837653   12728 command_runner.go:130] > Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556138103Z" level=info msg="shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	I0408 23:10:38.837653   12728 command_runner.go:130] > Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556756905Z" level=warning msg="cleaning up after shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	I0408 23:10:38.837653   12728 command_runner.go:130] > Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556921006Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.837999   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.565876302Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.643029581Z" level=info msg="ignoring event" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.646699056Z" level=info msg="shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647140153Z" level=warning msg="cleaning up after shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647214253Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724363532Z" level=info msg="Daemon shutdown complete"
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724563130Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724658330Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724794029Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Consumed 4.925s CPU time.
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 dockerd[3978]: time="2025-04-08T23:09:38.782261701Z" level=info msg="Starting up"
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:10:38 functional-618200 dockerd[3978]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:10:38 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
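	[Editor's note] The journal excerpt above pins down the failure: each earlier dockerd instance (pid 667, 1091, 1456) logs "containerd not running, starting managed containerd" and boots its own containerd on /var/run/docker/containerd/containerd.sock, whereas the restarted dockerd[3978] logs only "Starting up" at 23:09:38 and then, 60 seconds later, "failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded" - it sat waiting on the system containerd socket until the dial timeout expired. A minimal diagnostic sketch follows, assuming the VM is still reachable over SSH and that the guest image ships a systemd containerd unit (both are assumptions not confirmed by this log; the profile name is taken from this run):
	
	# Run from the same workspace as the commands above (hypothetical follow-up, not part of the test).
	out/minikube-windows-amd64.exe ssh -p functional-618200 "sudo systemctl status containerd --no-pager"
	out/minikube-windows-amd64.exe ssh -p functional-618200 "sudo journalctl --no-pager -u containerd | tail -n 50"
	out/minikube-windows-amd64.exe ssh -p functional-618200 "ls -l /run/containerd/containerd.sock"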
	I0408 23:10:38.863518   12728 out.go:201] 
	W0408 23:10:38.867350   12728 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 08 23:06:49 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.094333857Z" level=info msg="Starting up"
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.095749501Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.097506580Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.128963677Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152469766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152558876Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152717392Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152739794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152812201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152901110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153079328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153169038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153187739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153197940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153293950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153812303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156561482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156716198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156848512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156952822Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157044531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157169744Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190389421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190521734Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190544737Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190560338Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190576740Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190838067Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191154799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191361820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191472031Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191493633Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191512135Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191527737Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191541238Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191555639Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191571341Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191603144Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191615846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191628447Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191749659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191774162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191800364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191815666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191830867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191844669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191857670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191870171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191882273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191897274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191908775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191920677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191932778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191947379Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191967081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191979383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191992484Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192114796Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192196605Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192262611Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192291214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192304416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192318917Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192331918Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193151202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193285015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193371424Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193820570Z" level=info msg="containerd successfully booted in 0.066941s"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.170474987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.203429127Z" level=info msg="Loading containers: start."
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.350665658Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.583414712Z" level=info msg="Loading containers: done."
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608611503Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608776419Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609056647Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609260067Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.713909013Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.714066029Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:06:50 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.811241096Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813084503Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813257403Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813288003Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813374004Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:07:20 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:07:21 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:07:21 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:07:21 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.861204748Z" level=info msg="Starting up"
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.863521556Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.864856161Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1097
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.891008554Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913514335Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913559535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913591835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913605435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913626835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913637435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913748735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913963436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913985636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913996836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914019636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914159537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.916995847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917210048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917295148Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917328148Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917346448Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917634649Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917741950Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917760750Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917900050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917914850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917957150Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918196151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918327452Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918413452Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918430852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918442352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918453152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918462452Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918473352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918484552Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918499152Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918509952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918520052Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918543853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918558553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918568953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918579553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918589553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918609253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918626253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918638253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918657853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918673253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918682953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918692253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918702953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918715553Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918733953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918744753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918754653Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918959554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919161355Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919325455Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919361655Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919372055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919407356Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919416356Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919735157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919968758Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920117658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920171758Z" level=info msg="containerd successfully booted in 0.029982s"
	Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.908709690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.934950284Z" level=info msg="Loading containers: start."
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.062615440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.175164242Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.282062124Z" level=info msg="Loading containers: done."
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305666909Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305777709Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.341856738Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:07:23 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.343491744Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:07:32 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.905143108Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906371813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906906114Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.907033815Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906918515Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:07:33 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:07:33 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:07:33 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.955484761Z" level=info msg="Starting up"
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.957042767Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.958462672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1462
	Apr 08 23:07:33 functional-618200 dockerd[1462]: time="2025-04-08T23:07:33.983507761Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009132353Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009242353Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009307753Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009324953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009354454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009383954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009545254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009658655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009680555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009691855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009717555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.010024356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012580665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012671765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012945166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013039867Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013070567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013104967Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013460968Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013562869Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013583269Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013598369Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013611569Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013659269Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014010570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014156471Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014247371Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014266571Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014280071Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014397172Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014425272Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014441672Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014458272Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014472772Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014498972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014515572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014537972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014555672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014570972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014585972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014601072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014615672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014629372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014643572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014658573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014679173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014709673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014738473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014783273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014916873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014942274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014955574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014969174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015051774Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015092874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015107074Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015122374Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015133174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015147174Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015158874Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015573476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015638476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015690176Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015715476Z" level=info msg="containerd successfully booted in 0.033079s"
	Apr 08 23:07:35 functional-618200 dockerd[1456]: time="2025-04-08T23:07:35.262471031Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.762713164Z" level=info msg="Loading containers: start."
	Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.897446846Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.015338367Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.153824862Z" level=info msg="Loading containers: done."
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182692065Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182937366Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:07:38 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.220981402Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.221045402Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928174323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928255628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928274329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928976471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011163114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011256119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011273420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011437330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.047888267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048098278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048281989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048657110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089143872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089470391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089714404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.090374541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331240402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331940241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332248459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332901095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587350115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587733437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587951349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.588255466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643351545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643476652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643513354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643620460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681369670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681570881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681658686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.682028307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094044455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094486867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.095561595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.097530446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394114311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394433319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394665025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.395349443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643182806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643370211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643392711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.645053352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216296816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216387017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216402117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216977424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540620784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540963288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541044989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541180590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.848480641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850292361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850566464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.851150170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.385762643Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:09:27 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574335274Z" level=info msg="shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574507675Z" level=warning msg="cleaning up after shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574520575Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.575374478Z" level=info msg="ignoring event" container=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.602965785Z" level=info msg="ignoring event" container=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.603895489Z" level=info msg="shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604175090Z" level=warning msg="cleaning up after shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604242890Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614380530Z" level=info msg="shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614605231Z" level=warning msg="cleaning up after shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614742231Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.620402053Z" level=info msg="ignoring event" container=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.620802455Z" level=info msg="shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621015255Z" level=info msg="ignoring event" container=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621947059Z" level=info msg="ignoring event" container=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.622304660Z" level=info msg="ignoring event" container=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622827062Z" level=warning msg="cleaning up after shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.623203064Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622314560Z" level=info msg="shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624293868Z" level=warning msg="cleaning up after shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624306868Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622381461Z" level=info msg="shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631193795Z" level=warning msg="cleaning up after shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631249695Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.667400535Z" level=info msg="ignoring event" container=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.669623644Z" level=info msg="shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.672188454Z" level=warning msg="cleaning up after shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.672924657Z" level=info msg="ignoring event" container=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.673767960Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681394990Z" level=info msg="ignoring event" container=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681607190Z" level=info msg="ignoring event" container=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.681903492Z" level=info msg="shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685272405Z" level=warning msg="cleaning up after shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685407505Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.671723952Z" level=info msg="shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693693137Z" level=warning msg="cleaning up after shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693789338Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697563052Z" level=info msg="shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697641053Z" level=warning msg="cleaning up after shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697654453Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.725345060Z" level=info msg="ignoring event" container=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725697262Z" level=info msg="shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725980963Z" level=warning msg="cleaning up after shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.726206964Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.734018694Z" level=info msg="ignoring event" container=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.736798905Z" level=info msg="shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737017505Z" level=warning msg="cleaning up after shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737255906Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1456]: time="2025-04-08T23:09:32.552363388Z" level=info msg="ignoring event" container=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556138103Z" level=info msg="shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556756905Z" level=warning msg="cleaning up after shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556921006Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.565876302Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.643029581Z" level=info msg="ignoring event" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.646699056Z" level=info msg="shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647140153Z" level=warning msg="cleaning up after shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647214253Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724363532Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724563130Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724658330Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724794029Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:09:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Consumed 4.925s CPU time.
	Apr 08 23:09:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:09:38 functional-618200 dockerd[3978]: time="2025-04-08T23:09:38.782261701Z" level=info msg="Starting up"
	Apr 08 23:10:38 functional-618200 dockerd[3978]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:10:38 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
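The journalctl capture above shows docker.service restarting cleanly twice (23:07:21 and 23:07:33, each time logging "containerd not running, starting managed containerd"), but the third start at 23:09:38 (dockerd[3978]) never reaches that step: it blocks for roughly 60 seconds and then fails with failed to dial "/run/containerd/containerd.sock": context deadline exceeded. In other words, this start waited on the system containerd socket instead of spawning docker's managed containerd (which lives at /var/run/docker/containerd/containerd.sock in the earlier starts). A minimal diagnostic sketch, assuming the VM is still reachable over SSH; the profile name is taken from this report and the commands are standard minikube/systemd tooling, not part of the test harness:

    # From the Windows host, open a shell in the affected node
    out/minikube-windows-amd64.exe ssh -p functional-618200

    # Inside the VM: does the socket dockerd timed out dialing exist,
    # and is any containerd unit actually serving it?
    ls -l /run/containerd/containerd.sock
    sudo systemctl status containerd --no-pager
    sudo journalctl -u containerd --no-pager | tail -n 50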
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 08 23:06:49 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.094333857Z" level=info msg="Starting up"
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.095749501Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.097506580Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.128963677Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152469766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152558876Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152717392Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152739794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152812201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152901110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153079328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153169038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153187739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153197940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153293950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153812303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156561482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156716198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156848512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156952822Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157044531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157169744Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190389421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190521734Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190544737Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190560338Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190576740Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190838067Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191154799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191361820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191472031Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191493633Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191512135Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191527737Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191541238Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191555639Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191571341Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191603144Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191615846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191628447Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191749659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191774162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191800364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191815666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191830867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191844669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191857670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191870171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191882273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191897274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191908775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191920677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191932778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191947379Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191967081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191979383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191992484Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192114796Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192196605Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192262611Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192291214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192304416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192318917Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192331918Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193151202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193285015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193371424Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193820570Z" level=info msg="containerd successfully booted in 0.066941s"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.170474987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.203429127Z" level=info msg="Loading containers: start."
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.350665658Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.583414712Z" level=info msg="Loading containers: done."
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608611503Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608776419Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609056647Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609260067Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.713909013Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.714066029Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:06:50 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.811241096Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813084503Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813257403Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813288003Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813374004Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:07:20 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:07:21 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:07:21 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:07:21 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.861204748Z" level=info msg="Starting up"
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.863521556Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.864856161Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1097
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.891008554Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913514335Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913559535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913591835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913605435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913626835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913637435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913748735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913963436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913985636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913996836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914019636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914159537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.916995847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917210048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917295148Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917328148Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917346448Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917634649Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917741950Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917760750Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917900050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917914850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917957150Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918196151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918327452Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918413452Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918430852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918442352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918453152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918462452Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918473352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918484552Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918499152Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918509952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918520052Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918543853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918558553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918568953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918579553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918589553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918609253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918626253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918638253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918657853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918673253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918682953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918692253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918702953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918715553Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918733953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918744753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918754653Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918959554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919161355Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919325455Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919361655Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919372055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919407356Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919416356Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919735157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919968758Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920117658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920171758Z" level=info msg="containerd successfully booted in 0.029982s"
	Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.908709690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.934950284Z" level=info msg="Loading containers: start."
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.062615440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.175164242Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.282062124Z" level=info msg="Loading containers: done."
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305666909Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305777709Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.341856738Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:07:23 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.343491744Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:07:32 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.905143108Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906371813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906906114Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.907033815Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906918515Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:07:33 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:07:33 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:07:33 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.955484761Z" level=info msg="Starting up"
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.957042767Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.958462672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1462
	Apr 08 23:07:33 functional-618200 dockerd[1462]: time="2025-04-08T23:07:33.983507761Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009132353Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009242353Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009307753Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009324953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009354454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009383954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009545254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009658655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009680555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009691855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009717555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.010024356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012580665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012671765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012945166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013039867Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013070567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013104967Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013460968Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013562869Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013583269Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013598369Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013611569Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013659269Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014010570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014156471Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014247371Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014266571Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014280071Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014397172Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014425272Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014441672Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014458272Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014472772Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014498972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014515572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014537972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014555672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014570972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014585972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014601072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014615672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014629372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014643572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014658573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014679173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014709673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014738473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014783273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014916873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014942274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014955574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014969174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015051774Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015092874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015107074Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015122374Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015133174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015147174Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015158874Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015573476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015638476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015690176Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015715476Z" level=info msg="containerd successfully booted in 0.033079s"
	Apr 08 23:07:35 functional-618200 dockerd[1456]: time="2025-04-08T23:07:35.262471031Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.762713164Z" level=info msg="Loading containers: start."
	Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.897446846Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.015338367Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.153824862Z" level=info msg="Loading containers: done."
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182692065Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182937366Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:07:38 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.220981402Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.221045402Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928174323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928255628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928274329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928976471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011163114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011256119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011273420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011437330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.047888267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048098278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048281989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048657110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089143872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089470391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089714404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.090374541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331240402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331940241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332248459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332901095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587350115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587733437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587951349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.588255466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643351545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643476652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643513354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643620460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681369670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681570881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681658686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.682028307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094044455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094486867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.095561595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.097530446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394114311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394433319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394665025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.395349443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643182806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643370211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643392711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.645053352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216296816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216387017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216402117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216977424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540620784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540963288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541044989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541180590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.848480641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850292361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850566464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.851150170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.385762643Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:09:27 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574335274Z" level=info msg="shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574507675Z" level=warning msg="cleaning up after shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574520575Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.575374478Z" level=info msg="ignoring event" container=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.602965785Z" level=info msg="ignoring event" container=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.603895489Z" level=info msg="shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604175090Z" level=warning msg="cleaning up after shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604242890Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614380530Z" level=info msg="shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614605231Z" level=warning msg="cleaning up after shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614742231Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.620402053Z" level=info msg="ignoring event" container=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.620802455Z" level=info msg="shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621015255Z" level=info msg="ignoring event" container=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621947059Z" level=info msg="ignoring event" container=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.622304660Z" level=info msg="ignoring event" container=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622827062Z" level=warning msg="cleaning up after shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.623203064Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622314560Z" level=info msg="shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624293868Z" level=warning msg="cleaning up after shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624306868Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622381461Z" level=info msg="shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631193795Z" level=warning msg="cleaning up after shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631249695Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.667400535Z" level=info msg="ignoring event" container=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.669623644Z" level=info msg="shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.672188454Z" level=warning msg="cleaning up after shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.672924657Z" level=info msg="ignoring event" container=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.673767960Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681394990Z" level=info msg="ignoring event" container=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681607190Z" level=info msg="ignoring event" container=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.681903492Z" level=info msg="shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685272405Z" level=warning msg="cleaning up after shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685407505Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.671723952Z" level=info msg="shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693693137Z" level=warning msg="cleaning up after shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693789338Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697563052Z" level=info msg="shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697641053Z" level=warning msg="cleaning up after shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697654453Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.725345060Z" level=info msg="ignoring event" container=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725697262Z" level=info msg="shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725980963Z" level=warning msg="cleaning up after shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.726206964Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.734018694Z" level=info msg="ignoring event" container=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.736798905Z" level=info msg="shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737017505Z" level=warning msg="cleaning up after shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737255906Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1456]: time="2025-04-08T23:09:32.552363388Z" level=info msg="ignoring event" container=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556138103Z" level=info msg="shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556756905Z" level=warning msg="cleaning up after shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556921006Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.565876302Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.643029581Z" level=info msg="ignoring event" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.646699056Z" level=info msg="shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647140153Z" level=warning msg="cleaning up after shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647214253Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724363532Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724563130Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724658330Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724794029Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:09:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Consumed 4.925s CPU time.
	Apr 08 23:09:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:09:38 functional-618200 dockerd[3978]: time="2025-04-08T23:09:38.782261701Z" level=info msg="Starting up"
	Apr 08 23:10:38 functional-618200 dockerd[3978]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:10:38 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0408 23:10:38.868272   12728 out.go:270] * 
	* 
	W0408 23:10:38.869805   12728 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 23:10:38.876775   12728 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:678: failed to soft start minikube. args "out/minikube-windows-amd64.exe start -p functional-618200 --alsologtostderr -v=8": exit status 90
functional_test.go:680: soft start took 2m30.3985712s for "functional-618200" cluster.
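
Reading the journal excerpt above from the bottom up makes the exit status 90 concrete: the old daemon finishes a graceful shutdown at 23:09:38 ("Daemon shutdown complete"), systemd immediately launches a replacement dockerd[3978], and that process aborts exactly sixty seconds later because it cannot dial /run/containerd/containerd.sock before its context deadline, leaving docker.service in the failed state the test then reports. A minimal sketch of how one might confirm from inside the VM that containerd is the stalled dependency (profile name taken from this run; systemctl and journalctl are standard systemd tools, nothing minikube-specific):

	out/minikube-windows-amd64.exe -p functional-618200 ssh
	# inside the VM: is containerd up, and does its socket exist?
	sudo systemctl status containerd docker
	ls -l /run/containerd/containerd.sock
	# tail both units' journals around the restart window
	sudo journalctl -u containerd -u docker --no-pager | tail -n 50
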
I0408 23:10:39.528316    9864 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-618200 -n functional-618200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-618200 -n functional-618200: exit status 2 (11.7494306s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-618200 logs -n 25
E0408 23:13:10.445168    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-618200 logs -n 25: (2m48.3980433s)
helpers_test.go:252: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                 Args                                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ip      | addons-582000 ip                                                      | addons-582000     | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:56 UTC |
	| addons  | addons-582000 addons disable                                          | addons-582000     | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:56 UTC |
	|         | ingress-dns --alsologtostderr                                         |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| addons  | addons-582000 addons                                                  | addons-582000     | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:56 UTC |
	|         | disable csi-hostpath-driver                                           |                   |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                |                   |                   |         |                     |                     |
	| addons  | addons-582000 addons disable                                          | addons-582000     | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:57 UTC |
	|         | ingress --alsologtostderr -v=1                                        |                   |                   |         |                     |                     |
	| stop    | -p addons-582000                                                      | addons-582000     | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:57 UTC | 08 Apr 25 22:57 UTC |
	| addons  | enable dashboard -p                                                   | addons-582000     | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:57 UTC | 08 Apr 25 22:57 UTC |
	|         | addons-582000                                                         |                   |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                  | addons-582000     | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:57 UTC | 08 Apr 25 22:57 UTC |
	|         | addons-582000                                                         |                   |                   |         |                     |                     |
	| addons  | disable gvisor -p                                                     | addons-582000     | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:57 UTC | 08 Apr 25 22:57 UTC |
	|         | addons-582000                                                         |                   |                   |         |                     |                     |
	| delete  | -p addons-582000                                                      | addons-582000     | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:57 UTC | 08 Apr 25 22:58 UTC |
	| start   | -p nospam-268300 -n=1 --memory=2250 --wait=false                      | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:58 UTC | 08 Apr 25 23:01 UTC |
	|         | --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                       |                   |                   |         |                     |                     |
	| start   | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:01 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:01 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:02 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| pause   | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:02 UTC | 08 Apr 25 23:02 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:02 UTC | 08 Apr 25 23:02 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:02 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| unpause | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| delete  | -p nospam-268300                                                      | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	| start   | -p functional-618200                                                  | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:08 UTC |
	|         | --memory=4000                                                         |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                 |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                            |                   |                   |         |                     |                     |
	| start   | -p functional-618200                                                  | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:08 UTC |                     |
	|         | --alsologtostderr -v=8                                                |                   |                   |         |                     |                     |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 23:08:09
	Running on machine: minikube6
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 23:08:09.246712   12728 out.go:345] Setting OutFile to fd 812 ...
	I0408 23:08:09.325819   12728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:08:09.325819   12728 out.go:358] Setting ErrFile to fd 1352...
	I0408 23:08:09.325819   12728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:08:09.346759   12728 out.go:352] Setting JSON to false
	I0408 23:08:09.349936   12728 start.go:129] hostinfo: {"hostname":"minikube6","uptime":10687,"bootTime":1744143002,"procs":176,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0408 23:08:09.349936   12728 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 23:08:09.354680   12728 out.go:177] * [functional-618200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0408 23:08:09.360335   12728 notify.go:220] Checking for updates...
	I0408 23:08:09.363251   12728 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0408 23:08:09.365934   12728 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 23:08:09.370015   12728 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0408 23:08:09.372261   12728 out.go:177]   - MINIKUBE_LOCATION=20501
	I0408 23:08:09.376217   12728 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 23:08:09.380199   12728 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:08:09.380595   12728 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 23:08:14.781214   12728 out.go:177] * Using the hyperv driver based on existing profile
	I0408 23:08:14.787195   12728 start.go:297] selected driver: hyperv
	I0408 23:08:14.787195   12728 start.go:901] validating driver "hyperv" against &{Name:functional-618200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-618200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.37 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:08:14.788108   12728 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 23:08:14.840719   12728 cni.go:84] Creating CNI manager for ""
	I0408 23:08:14.840719   12728 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 23:08:14.840719   12728 start.go:340] cluster config:
	{Name:functional-618200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-618200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.37 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:08:14.840719   12728 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 23:08:14.844868   12728 out.go:177] * Starting "functional-618200" primary control-plane node in "functional-618200" cluster
	I0408 23:08:14.847279   12728 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0408 23:08:14.847279   12728 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0408 23:08:14.847279   12728 cache.go:56] Caching tarball of preloaded images
	I0408 23:08:14.847279   12728 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0408 23:08:14.847279   12728 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0408 23:08:14.848442   12728 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-618200\config.json ...
	I0408 23:08:14.850635   12728 start.go:360] acquireMachinesLock for functional-618200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 23:08:14.850635   12728 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-618200"
	I0408 23:08:14.851114   12728 start.go:96] Skipping create...Using existing machine configuration
	I0408 23:08:14.851183   12728 fix.go:54] fixHost starting: 
	I0408 23:08:14.851361   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:17.635558   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:17.636077   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:17.636077   12728 fix.go:112] recreateIfNeeded on functional-618200: state=Running err=<nil>
	W0408 23:08:17.636077   12728 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 23:08:17.641199   12728 out.go:177] * Updating the running hyperv "functional-618200" VM ...
	I0408 23:08:17.643270   12728 machine.go:93] provisionDockerMachine start ...
	I0408 23:08:17.643828   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:19.832353   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:19.832353   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:19.833486   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:22.348787   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:22.348787   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:22.354331   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:22.354942   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:22.354942   12728 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 23:08:22.482052   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-618200
	
	I0408 23:08:22.482109   12728 buildroot.go:166] provisioning hostname "functional-618200"
	I0408 23:08:22.482218   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:24.614743   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:24.615199   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:24.615199   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:27.116022   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:27.116669   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:27.122660   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:27.122837   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:27.122837   12728 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-618200 && echo "functional-618200" | sudo tee /etc/hostname
	I0408 23:08:27.296048   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-618200
	
	I0408 23:08:27.296048   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:29.515938   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:29.516732   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:29.516860   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:32.104430   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:32.104430   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:32.111087   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:32.111822   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:32.111822   12728 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-618200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-618200/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-618200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 23:08:32.239307   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 23:08:32.239307   12728 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0408 23:08:32.239307   12728 buildroot.go:174] setting up certificates
	I0408 23:08:32.239307   12728 provision.go:84] configureAuth start
	I0408 23:08:32.239907   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:34.375660   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:34.376637   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:34.376637   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:36.940152   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:36.940811   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:36.940910   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:39.102003   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:39.102003   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:39.102003   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:41.651752   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:41.651752   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:41.651752   12728 provision.go:143] copyHostCerts
	I0408 23:08:41.652744   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0408 23:08:41.653241   12728 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0408 23:08:41.653241   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0408 23:08:41.653897   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0408 23:08:41.655530   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0408 23:08:41.655919   12728 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0408 23:08:41.655919   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0408 23:08:41.656607   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0408 23:08:41.657919   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0408 23:08:41.658240   12728 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0408 23:08:41.658370   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0408 23:08:41.658791   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0408 23:08:41.659993   12728 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-618200 san=[127.0.0.1 192.168.113.37 functional-618200 localhost minikube]
	I0408 23:08:41.724180   12728 provision.go:177] copyRemoteCerts
	I0408 23:08:41.734528   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 23:08:41.734661   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:43.857555   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:43.858453   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:43.858453   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:46.376433   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:46.376433   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:46.376862   12728 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:08:46.479933   12728 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7452489s)
	I0408 23:08:46.479933   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0408 23:08:46.480251   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0408 23:08:46.526275   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0408 23:08:46.526275   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 23:08:46.571513   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0408 23:08:46.571513   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 23:08:46.618636   12728 provision.go:87] duration metric: took 14.3791442s to configureAuth
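
The configureAuth step above copies the host CA material and mints a fresh server certificate whose SANs (127.0.0.1, 192.168.113.37, functional-618200, localhost, minikube) come straight from the san=[...] list logged at 23:08:41, then scps ca.pem, server.pem and server-key.pem into /etc/docker so dockerd can run with --tlsverify. A rough openssl equivalent of the signing step, purely illustrative (minikube does this in Go inside provision.go; the key and CA file names here just mirror the paths in the log):

	# hypothetical re-creation of the server cert with the same SANs
	printf 'subjectAltName=IP:127.0.0.1,IP:192.168.113.37,DNS:functional-618200,DNS:localhost,DNS:minikube\n' > san.cnf
	openssl req -new -key server-key.pem -subj "/O=jenkins.functional-618200" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -extfile san.cnf -days 365 -out server.pem
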
	I0408 23:08:46.618636   12728 buildroot.go:189] setting minikube options for container-runtime
	I0408 23:08:46.619360   12728 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:08:46.619360   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:48.759145   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:48.759997   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:48.760072   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:51.352431   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:51.352840   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:51.358422   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:51.359181   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:51.359181   12728 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0408 23:08:51.498239   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0408 23:08:51.498239   12728 buildroot.go:70] root file system type: tmpfs
	I0408 23:08:51.499500   12728 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0408 23:08:51.499565   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:53.639609   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:53.639609   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:53.639706   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:56.165286   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:56.165286   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:56.172269   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:56.172483   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:56.172483   12728 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0408 23:08:56.329047   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
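
Note the round trip on the ExecReload line: the payload sent over SSH escapes the variable as \$MAINPID so the remote shell passes it through untouched, and the file echoed back by tee shows the bare $MAINPID that systemd will expand at reload time. The same effect in isolation (demo path hypothetical):

    # inside double quotes the shell turns \$ into a literal $,
    # so systemd, not the shell, resolves MAINPID later
    printf %s "ExecReload=/bin/kill -s HUP \$MAINPID" | tee /tmp/demo.conf
    cat /tmp/demo.conf    # -> ExecReload=/bin/kill -s HUP $MAINPID
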
	I0408 23:08:56.329209   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:58.408221   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:58.408271   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:58.408271   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:00.972449   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:00.972449   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:00.978298   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:09:00.979066   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:09:00.979150   12728 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0408 23:09:01.120743   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 23:09:01.120743   12728 machine.go:96] duration metric: took 43.4763536s to provisionDockerMachine
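
The swap at 23:09:01 is guarded by diff: when the freshly rendered docker.service.new is byte-identical to the installed unit, diff -u exits 0 and the || branch (move, daemon-reload, enable, restart) never runs, which matches the empty output above. The guard from the log, restated readably:

    NEW=/lib/systemd/system/docker.service.new
    CUR=/lib/systemd/system/docker.service
    # diff exits non-zero only when the files differ, so the braced group,
    # and with it the Docker restart, runs only on a real configuration change
    sudo diff -u "$CUR" "$NEW" || {
        sudo mv "$NEW" "$CUR"
        sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    }
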
	I0408 23:09:01.120743   12728 start.go:293] postStartSetup for "functional-618200" (driver="hyperv")
	I0408 23:09:01.120743   12728 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 23:09:01.134465   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 23:09:01.134586   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:03.239597   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:03.239597   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:03.240300   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:05.769173   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:05.769791   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:05.769977   12728 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:09:05.882717   12728 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7480703s)
	I0408 23:09:05.895357   12728 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 23:09:05.906701   12728 command_runner.go:130] > NAME=Buildroot
	I0408 23:09:05.906871   12728 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0408 23:09:05.906871   12728 command_runner.go:130] > ID=buildroot
	I0408 23:09:05.906871   12728 command_runner.go:130] > VERSION_ID=2023.02.9
	I0408 23:09:05.906871   12728 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0408 23:09:05.906871   12728 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 23:09:05.906871   12728 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0408 23:09:05.907746   12728 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0408 23:09:05.909230   12728 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
	I0408 23:09:05.909297   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /etc/ssl/certs/98642.pem
	I0408 23:09:05.909974   12728 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts -> hosts in /etc/test/nested/copy/9864
	I0408 23:09:05.909974   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts -> /etc/test/nested/copy/9864/hosts
	I0408 23:09:05.922022   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9864
	I0408 23:09:05.940207   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
	I0408 23:09:05.986656   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts --> /etc/test/nested/copy/9864/hosts (40 bytes)
	I0408 23:09:06.037448   12728 start.go:296] duration metric: took 4.9164478s for postStartSetup
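
postStartSetup mirrors everything under the host-side .minikube\files tree into the guest at the same absolute path, which is how the test's cert and hosts fixtures land in /etc/ssl/certs and /etc/test/nested/copy/9864. A sketch of the convention (file name hypothetical; MINIKUBE_HOME as configured for this run):

    mkdir -p "$MINIKUBE_HOME/files/etc/ssl/certs"
    cp extra-ca.pem "$MINIKUBE_HOME/files/etc/ssl/certs/"
    # on the next start, filesync copies it verbatim to /etc/ssl/certs/extra-ca.pem in the guest
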
	I0408 23:09:06.037545   12728 fix.go:56] duration metric: took 51.1857011s for fixHost
	I0408 23:09:06.037624   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:08.158094   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:08.158094   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:08.158094   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:10.681527   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:10.681527   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:10.688411   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:09:10.689102   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:09:10.689245   12728 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 23:09:10.829582   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744153750.860325411
	
	I0408 23:09:10.829582   12728 fix.go:216] guest clock: 1744153750.860325411
	I0408 23:09:10.829683   12728 fix.go:229] Guest: 2025-04-08 23:09:10.860325411 +0000 UTC Remote: 2025-04-08 23:09:06.0375451 +0000 UTC m=+56.890513901 (delta=4.822780311s)
	I0408 23:09:10.829858   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:12.957017   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:12.957017   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:12.957017   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:15.521412   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:15.521412   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:15.527916   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:09:15.528634   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:09:15.528634   12728 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744153750
	I0408 23:09:15.671072   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr  8 23:09:10 UTC 2025
	
	I0408 23:09:15.671072   12728 fix.go:236] clock set: Tue Apr  8 23:09:10 UTC 2025
	 (err=<nil>)
	I0408 23:09:15.671072   12728 start.go:83] releasing machines lock for "functional-618200", held for 1m0.8196519s
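
fixHost then reconciles the guest clock: it reads date +%s.%N over SSH, computes the drift against the recorded host time (4.82 s here, apparently past the allowed skew), and writes the clock back with date -s. Reduced to its two guest-side commands:

    date +%s.%N               # epoch seconds.nanoseconds, used to compute the drift
    sudo date -s @1744153750  # re-set the clock to the reference epoch second
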
	I0408 23:09:15.671072   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:17.795924   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:17.795924   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:17.795924   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:20.343976   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:20.344152   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:20.347691   12728 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0408 23:09:20.347691   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:20.358515   12728 ssh_runner.go:195] Run: cat /version.json
	I0408 23:09:20.358515   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:22.544260   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:22.544260   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:22.544260   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:22.547450   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:22.547450   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:22.547565   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:25.306292   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:25.306292   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:25.306292   12728 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:09:25.329784   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:25.330858   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:25.330972   12728 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:09:25.407167   12728 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0408 23:09:25.407167   12728 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0594111s)
	W0408 23:09:25.407380   12728 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
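
This is the likely root of the registry warning printed below at 23:09:25: the connectivity probe keeps the Windows binary name, so the Linux guest is asked to run curl.exe, bash cannot find it, and the probe dies with exit 127 (command not found) rather than reporting real network state. Reproduced inside the guest:

    curl.exe -sS -m 2 https://registry.k8s.io/ ; echo $?   # command not found, status 127
    curl -sS -m 2 https://registry.k8s.io/ ; echo $?       # the probe that could actually run, assuming the ISO ships curl
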
	I0408 23:09:25.427823   12728 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0408 23:09:25.427823   12728 ssh_runner.go:235] Completed: cat /version.json: (5.0692422s)
	I0408 23:09:25.441651   12728 ssh_runner.go:195] Run: systemctl --version
	I0408 23:09:25.452009   12728 command_runner.go:130] > systemd 252 (252)
	I0408 23:09:25.452009   12728 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0408 23:09:25.462226   12728 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0408 23:09:25.470182   12728 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0408 23:09:25.470647   12728 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 23:09:25.483329   12728 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 23:09:25.504611   12728 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
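
Before picking a runtime, minikube shelves any bridge or podman CNI configs by renaming them with a .mk_disabled suffix; here the find matches nothing, so nothing is disabled. The same idiom with conventional quoting:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
        -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
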
	I0408 23:09:25.504611   12728 start.go:495] detecting cgroup driver to use...
	I0408 23:09:25.505055   12728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0408 23:09:25.518103   12728 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0408 23:09:25.518165   12728 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0408 23:09:25.545691   12728 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0408 23:09:25.557677   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0408 23:09:25.585837   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 23:09:25.605727   12728 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 23:09:25.616269   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 23:09:25.648654   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:09:25.682043   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 23:09:25.712502   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:09:25.745703   12728 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 23:09:25.776089   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 23:09:25.813738   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 23:09:25.847440   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0408 23:09:25.878964   12728 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 23:09:25.897917   12728 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0408 23:09:25.910039   12728 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 23:09:25.937635   12728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:09:26.191579   12728 ssh_runner.go:195] Run: sudo systemctl restart containerd
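
The containerd pass is a string of in-place sed edits to /etc/containerd/config.toml: pin the pause sandbox image, disable restrict_oom_score_adj, force SystemdCgroup = false (the cgroupfs driver chosen above), migrate v1 runtime names to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports, followed by daemon-reload and a containerd restart. The load-bearing edit is the cgroup one:

    # make containerd agree with the "cgroupfs" driver minikube selected
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd
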
	I0408 23:09:26.223263   12728 start.go:495] detecting cgroup driver to use...
	I0408 23:09:26.235750   12728 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0408 23:09:26.260048   12728 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0408 23:09:26.260125   12728 command_runner.go:130] > [Unit]
	I0408 23:09:26.260125   12728 command_runner.go:130] > Description=Docker Application Container Engine
	I0408 23:09:26.260125   12728 command_runner.go:130] > Documentation=https://docs.docker.com
	I0408 23:09:26.260200   12728 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0408 23:09:26.260200   12728 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0408 23:09:26.260200   12728 command_runner.go:130] > StartLimitBurst=3
	I0408 23:09:26.260200   12728 command_runner.go:130] > StartLimitIntervalSec=60
	I0408 23:09:26.260200   12728 command_runner.go:130] > [Service]
	I0408 23:09:26.260200   12728 command_runner.go:130] > Type=notify
	I0408 23:09:26.260200   12728 command_runner.go:130] > Restart=on-failure
	I0408 23:09:26.260338   12728 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0408 23:09:26.260338   12728 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0408 23:09:26.260338   12728 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0408 23:09:26.260338   12728 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0408 23:09:26.260338   12728 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0408 23:09:26.260472   12728 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0408 23:09:26.260472   12728 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0408 23:09:26.260472   12728 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0408 23:09:26.260472   12728 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0408 23:09:26.260472   12728 command_runner.go:130] > ExecStart=
	I0408 23:09:26.260472   12728 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0408 23:09:26.260581   12728 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0408 23:09:26.260581   12728 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0408 23:09:26.260581   12728 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0408 23:09:26.260581   12728 command_runner.go:130] > LimitNOFILE=infinity
	I0408 23:09:26.260678   12728 command_runner.go:130] > LimitNPROC=infinity
	I0408 23:09:26.260707   12728 command_runner.go:130] > LimitCORE=infinity
	I0408 23:09:26.260707   12728 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0408 23:09:26.260707   12728 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0408 23:09:26.260764   12728 command_runner.go:130] > TasksMax=infinity
	I0408 23:09:26.260764   12728 command_runner.go:130] > TimeoutStartSec=0
	I0408 23:09:26.260764   12728 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0408 23:09:26.260764   12728 command_runner.go:130] > Delegate=yes
	I0408 23:09:26.260802   12728 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0408 23:09:26.260802   12728 command_runner.go:130] > KillMode=process
	I0408 23:09:26.260847   12728 command_runner.go:130] > [Install]
	I0408 23:09:26.260847   12728 command_runner.go:130] > WantedBy=multi-user.target
	I0408 23:09:26.272013   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:09:26.309047   12728 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 23:09:26.364238   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:09:26.397809   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 23:09:26.420470   12728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:09:26.452776   12728 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0408 23:09:26.465171   12728 ssh_runner.go:195] Run: which cri-dockerd
	I0408 23:09:26.471612   12728 command_runner.go:130] > /usr/bin/cri-dockerd
	I0408 23:09:26.483601   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0408 23:09:26.500243   12728 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
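
With docker as the runtime, crictl is repointed from the containerd socket to cri-dockerd's, the cri-dockerd binary is confirmed on PATH, and a 10-cni.conf drop-in is staged for the cri-docker service. Writing the one-line /etc/crictl.yaml by hand looks like this:

    printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' | sudo tee /etc/crictl.yaml
    sudo crictl info    # sanity check once cri-docker is up; would fail while the docker service is down
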
	I0408 23:09:26.541951   12728 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0408 23:09:26.818543   12728 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0408 23:09:27.059393   12728 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0408 23:09:27.059393   12728 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0408 23:09:27.105693   12728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:09:27.332438   12728 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 23:10:38.780025   12728 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0408 23:10:38.780100   12728 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0408 23:10:38.783775   12728 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4502693s)
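
Here is the actual SoftStart failure: systemctl restart docker blocked for 71 seconds and returned a control-process error. The rest of this log is the unit journal minikube collects for the postmortem; the same view by hand:

    systemctl status docker.service        # unit state plus the most recent log lines
    journalctl -xeu docker.service         # the command the error message itself suggests
    sudo journalctl --no-pager -u docker   # what minikube runs next (below)
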
	I0408 23:10:38.797107   12728 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0408 23:10:38.826638   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	I0408 23:10:38.826758   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.094333857Z" level=info msg="Starting up"
	I0408 23:10:38.826758   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.095749501Z" level=info msg="containerd not running, starting managed containerd"
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.097506580Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.128963677Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152469766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152558876Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152717392Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152739794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827006   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152812201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827006   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152901110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827074   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153079328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827097   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153169038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827157   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153187739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827181   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153197940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827181   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153293950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827260   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153812303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827260   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156561482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827340   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156716198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156848512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156952822Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157044531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157169744Z" level=info msg="metadata content store policy set" policy=shared
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190389421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190521734Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190544737Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190560338Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190576740Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190838067Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191154799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191361820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191472031Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191493633Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191512135Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191527737Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191541238Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191555639Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191571341Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191603144Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827985   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191615846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.828081   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191628447Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.828188   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191749659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828234   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191774162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828234   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191800364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828234   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191815666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828308   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191830867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828308   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191844669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828356   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191857670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828356   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191870171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828426   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191882273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828426   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191897274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828489   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191908775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828524   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191920677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828613   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191932778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828649   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191947379Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0408 23:10:38.828649   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191967081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828790   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191979383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828855   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191992484Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0408 23:10:38.828855   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192114796Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0408 23:10:38.828935   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192196605Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192262611Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192291214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192304416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192318917Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192331918Z" level=info msg="NRI interface is disabled by configuration."
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193151202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193285015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193371424Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193820570Z" level=info msg="containerd successfully booted in 0.066941s"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.170474987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.203429127Z" level=info msg="Loading containers: start."
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.350665658Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.583414712Z" level=info msg="Loading containers: done."
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608611503Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608776419Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609056647Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609260067Z" level=info msg="Daemon has completed initialization"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.713909013Z" level=info msg="API listen on /var/run/docker.sock"
	I0408 23:10:38.829565   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.714066029Z" level=info msg="API listen on [::]:2376"
	I0408 23:10:38.829565   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 systemd[1]: Started Docker Application Container Engine.
	I0408 23:10:38.829625   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.811241096Z" level=info msg="Processing signal 'terminated'"
	I0408 23:10:38.829625   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813084503Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0408 23:10:38.829625   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813257403Z" level=info msg="Daemon shutdown complete"
	I0408 23:10:38.829625   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813288003Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0408 23:10:38.829753   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813374004Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0408 23:10:38.829753   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	I0408 23:10:38.829790   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	I0408 23:10:38.829942   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	I0408 23:10:38.829942   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	I0408 23:10:38.829942   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.861204748Z" level=info msg="Starting up"
	I0408 23:10:38.830042   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.863521556Z" level=info msg="containerd not running, starting managed containerd"
	I0408 23:10:38.830042   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.864856161Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1097
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.891008554Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913514335Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913559535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913591835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913605435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913626835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913637435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913748735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913963436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913985636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913996836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914019636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914159537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.916995847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917210048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917295148Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0408 23:10:38.830797   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917328148Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0408 23:10:38.830797   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917346448Z" level=info msg="metadata content store policy set" policy=shared
	I0408 23:10:38.830797   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917634649Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917741950Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917760750Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917900050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917914850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917957150Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918196151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918327452Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918413452Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918430852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918442352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918453152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918462452Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918473352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918484552Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918499152Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.831194   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918509952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.831194   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918520052Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.831194   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918543853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831194   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918558553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831300   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918568953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831300   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918579553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831300   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918589553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831377   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918609253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831377   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918626253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831422   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918638253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831442   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918657853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831442   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918673253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831442   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918682953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918692253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918702953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918715553Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918733953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918744753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918754653Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918959554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919161355Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919325455Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919361655Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919372055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919407356Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919416356Z" level=info msg="NRI interface is disabled by configuration."
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919735157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919968758Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920117658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920171758Z" level=info msg="containerd successfully booted in 0.029982s"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.908709690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.934950284Z" level=info msg="Loading containers: start."
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.062615440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.175164242Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.282062124Z" level=info msg="Loading containers: done."
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305666909Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305777709Z" level=info msg="Daemon has completed initialization"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.341856738Z" level=info msg="API listen on /var/run/docker.sock"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 systemd[1]: Started Docker Application Container Engine.
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.343491744Z" level=info msg="API listen on [::]:2376"
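
The run above (23:07:21 through 23:07:23) is the first boot of dockerd (pid 1091) and its managed containerd (pid 1097): the containerd plugin set loads, the overlay2 graphdriver is selected, the default docker0 bridge is assigned 172.17.0.0/16, and the API comes up on /var/run/docker.sock and [::]:2376. The ip6tables warning is benign here: the guest kernel has no ip6tables nat table, so Docker skips the IPv6 NAT chains while IPv4 networking proceeds normally. A minimal liveness probe against such a daemon could look like the sketch below; it assumes the Docker Engine Go SDK (github.com/docker/docker/client) and a reachable DOCKER_HOST, and is an illustration, not code from this test suite:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/docker/docker/client" // Docker Engine Go SDK
    )

    func main() {
        // Build a client from DOCKER_HOST / DOCKER_TLS_VERIFY etc.,
        // negotiating an API version the daemon supports.
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        // Ping hits the daemon's /_ping endpoint, i.e. the API the log
        // shows listening on /var/run/docker.sock and [::]:2376, so it
        // succeeds exactly once the "API listen" lines have appeared.
        ping, err := cli.Ping(context.Background())
        if err != nil {
            log.Fatal(err) // daemon not (yet) reachable
        }
        fmt.Println("daemon is up, API version:", ping.APIVersion)
    }
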
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.905143108Z" level=info msg="Processing signal 'terminated'"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906371813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906906114Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.907033815Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906918515Z" level=info msg="Daemon shutdown complete"
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.955484761Z" level=info msg="Starting up"
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.957042767Z" level=info msg="containerd not running, starting managed containerd"
	I0408 23:10:38.832402   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.958462672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1462
	I0408 23:10:38.832402   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 dockerd[1462]: time="2025-04-08T23:07:33.983507761Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0408 23:10:38.832440   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009132353Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0408 23:10:38.832440   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009242353Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0408 23:10:38.832490   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009307753Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0408 23:10:38.832524   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009324953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832524   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009354454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832569   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009383954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009545254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009658655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009680555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009691855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009717555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832745   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.010024356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832794   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012580665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832826   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012671765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832878   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012945166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832917   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013039867Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0408 23:10:38.832917   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013070567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0408 23:10:38.832917   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013104967Z" level=info msg="metadata content store policy set" policy=shared
	I0408 23:10:38.832975   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013460968Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0408 23:10:38.832996   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013562869Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013583269Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013598369Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013611569Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013659269Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014010570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014156471Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014247371Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014266571Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014280071Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014397172Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014425272Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014441672Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014458272Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014472772Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014498972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014515572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014537972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014555672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014570972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833567   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014585972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833567   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014601072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014615672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014629372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014643572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014658573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014679173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014709673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014738473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014783273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014916873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014942274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014955574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014969174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015051774Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015092874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015107074Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015122374Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015133174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015147174Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015158874Z" level=info msg="NRI interface is disabled by configuration."
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015573476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015638476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015690176Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015715476Z" level=info msg="containerd successfully booted in 0.033079s"
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:35 functional-618200 dockerd[1456]: time="2025-04-08T23:07:35.262471031Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.762713164Z" level=info msg="Loading containers: start."
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.897446846Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.015338367Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.153824862Z" level=info msg="Loading containers: done."
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182692065Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182937366Z" level=info msg="Daemon has completed initialization"
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 systemd[1]: Started Docker Application Container Engine.
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.220981402Z" level=info msg="API listen on /var/run/docker.sock"
	I0408 23:10:38.834375   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.221045402Z" level=info msg="API listen on [::]:2376"
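
At 23:07:32 systemd begins stopping the engine: dockerd (pid 1091) logs "Processing signal 'terminated'", winds down its libcontainerd event streams, and exits cleanly ("Deactivated successfully"), after which systemd launches a fresh dockerd (pid 1456) with a new managed containerd (pid 1462) that repeats the same boot sequence and is serving again by 23:07:38. A signal-driven stop followed immediately by a clean start is consistent with a deliberate daemon restart rather than a crash. The termination pattern visible in the log reduces to the standard Go idiom sketched below (a self-contained illustration, not dockerd's actual code):

    package main

    import (
        "fmt"
        "os"
        "os/signal"
        "syscall"
    )

    func main() {
        // Mirror the shutdown pattern in the log: block until systemd
        // delivers SIGTERM, then release resources and exit 0 so the
        // unit reports "Deactivated successfully".
        sig := make(chan os.Signal, 1)
        signal.Notify(sig, syscall.SIGTERM, syscall.SIGINT)

        fmt.Println("daemon running; waiting for signal")
        s := <-sig
        fmt.Printf("Processing signal '%s'\n", s)

        // ... stop event streams, close sockets, flush state here ...

        fmt.Println("Daemon shutdown complete")
    }
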
	I0408 23:10:38.834375   12728 command_runner.go:130] > Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928174323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834375   12728 command_runner.go:130] > Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928255628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834443   12728 command_runner.go:130] > Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928274329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834443   12728 command_runner.go:130] > Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928976471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834533   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011163114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011256119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011273420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011437330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.047888267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048098278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048281989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048657110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089143872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089470391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089714404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.090374541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331240402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331940241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332248459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332901095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587350115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587733437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587951349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.588255466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643351545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643476652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643513354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835183   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643620460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835183   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681369670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835183   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681570881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835294   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681658686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835294   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.682028307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835294   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094044455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835373   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094486867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835373   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.095561595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835373   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.097530446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835463   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394114311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835463   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394433319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835567   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394665025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835567   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.395349443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835567   12728 command_runner.go:130] > Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643182806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835567   12728 command_runner.go:130] > Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643370211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835723   12728 command_runner.go:130] > Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643392711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835723   12728 command_runner.go:130] > Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.645053352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835723   12728 command_runner.go:130] > Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216296816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835827   12728 command_runner.go:130] > Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216387017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835827   12728 command_runner.go:130] > Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216402117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835887   12728 command_runner.go:130] > Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216977424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540620784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540963288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541044989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541180590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.848480641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850292361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850566464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.851150170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
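
Every four-line burst from dockerd[1462] between 23:07:46 and 23:08:07 (event.v1.publisher, internal.v1.shutdown, ttrpc.v1.task, ttrpc.v1.pause, each tagged runtime=io.containerd.runc.v2) is one containerd-shim-runc-v2 process initializing, that is, one container or pod sandbox being started as the control plane comes up. Counting the bursts therefore counts container starts. A small stdlib sketch that does this over a journal dump on stdin (illustrative; the pattern strings are taken verbatim from the lines above):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // Counts containerd-shim-runc-v2 start bursts in a journal dump:
    // each burst opens with the shim loading io.containerd.event.v1.publisher
    // under runtime=io.containerd.runc.v2, as in the log above.
    func main() {
        starts := 0
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            line := sc.Text()
            if strings.Contains(line, `runtime=io.containerd.runc.v2`) &&
                strings.Contains(line, `io.containerd.event.v1.publisher`) {
                starts++
            }
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("shim starts observed: %d\n", starts)
    }

Fed a dump like this section (for example piped from minikube ssh -- sudo journalctl -u docker --no-pager), it reports one start per burst.
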
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.385762643Z" level=info msg="Processing signal 'terminated'"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574335274Z" level=info msg="shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574507675Z" level=warning msg="cleaning up after shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574520575Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.575374478Z" level=info msg="ignoring event" container=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.602965785Z" level=info msg="ignoring event" container=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.603895489Z" level=info msg="shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604175090Z" level=warning msg="cleaning up after shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604242890Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614380530Z" level=info msg="shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614605231Z" level=warning msg="cleaning up after shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614742231Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.620402053Z" level=info msg="ignoring event" container=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.620802455Z" level=info msg="shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621015255Z" level=info msg="ignoring event" container=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621947059Z" level=info msg="ignoring event" container=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.622304660Z" level=info msg="ignoring event" container=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622827062Z" level=warning msg="cleaning up after shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.623203064Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622314560Z" level=info msg="shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	I0408 23:10:38.836542   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624293868Z" level=warning msg="cleaning up after shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	I0408 23:10:38.836542   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624306868Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836542   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622381461Z" level=info msg="shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	I0408 23:10:38.836542   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631193795Z" level=warning msg="cleaning up after shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631249695Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.667400535Z" level=info msg="ignoring event" container=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.669623644Z" level=info msg="shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.672188454Z" level=warning msg="cleaning up after shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.672924657Z" level=info msg="ignoring event" container=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.673767960Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681394990Z" level=info msg="ignoring event" container=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681607190Z" level=info msg="ignoring event" container=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.681903492Z" level=info msg="shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685272405Z" level=warning msg="cleaning up after shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685407505Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.671723952Z" level=info msg="shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693693137Z" level=warning msg="cleaning up after shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693789338Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697563052Z" level=info msg="shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697641053Z" level=warning msg="cleaning up after shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697654453Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.725345060Z" level=info msg="ignoring event" container=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725697262Z" level=info msg="shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	I0408 23:10:38.837349   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725980963Z" level=warning msg="cleaning up after shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	I0408 23:10:38.837349   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.726206964Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.837349   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.734018694Z" level=info msg="ignoring event" container=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.837349   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.736798905Z" level=info msg="shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	I0408 23:10:38.837581   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737017505Z" level=warning msg="cleaning up after shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	I0408 23:10:38.837581   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737255906Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.837581   12728 command_runner.go:130] > Apr 08 23:09:32 functional-618200 dockerd[1456]: time="2025-04-08T23:09:32.552363388Z" level=info msg="ignoring event" container=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.837653   12728 command_runner.go:130] > Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556138103Z" level=info msg="shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	I0408 23:10:38.837653   12728 command_runner.go:130] > Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556756905Z" level=warning msg="cleaning up after shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	I0408 23:10:38.837653   12728 command_runner.go:130] > Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556921006Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.837999   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.565876302Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.643029581Z" level=info msg="ignoring event" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.646699056Z" level=info msg="shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647140153Z" level=warning msg="cleaning up after shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647214253Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724363532Z" level=info msg="Daemon shutdown complete"
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724563130Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724658330Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724794029Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Consumed 4.925s CPU time.
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 dockerd[3978]: time="2025-04-08T23:09:38.782261701Z" level=info msg="Starting up"
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:10:38 functional-618200 dockerd[3978]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:10:38 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
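Note: the journal excerpt above captures the failure directly. After minikube restarts Docker to apply its configuration, the new dockerd (pid 3978) logs "Starting up" at 23:09:38 but its managed containerd never answers; sixty seconds later dockerd gives up with failed to dial "/run/containerd/containerd.sock": context deadline exceeded, and systemd records status=1/FAILURE. The sketch below is illustrative only (it is not minikube or dockerd source; the socket path and the ~60 s window are taken from the log, and the probe program itself is hypothetical). It shows how a dial-with-deadline check of that socket looks in Go:

    // probe_containerd.go - hypothetical standalone sketch: dial the containerd
    // socket with a deadline, mimicking the check dockerd failed above.
    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // dockerd gave up after ~60s (23:09:38 -> 23:10:38 in the journal).
        ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
        defer cancel()

        var d net.Dialer
        conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
        if err != nil {
            // With no healthy listener this reports an error such as
            // "context deadline exceeded" or "connect: connection refused".
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()
        fmt.Println("containerd socket is accepting connections")
    }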
	I0408 23:10:38.863518   12728 out.go:201] 
	W0408 23:10:38.867350   12728 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
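The two commands named just above are the standard next step on the node. As a purely illustrative aid (a hypothetical helper, not part of the test harness), a minimal Go shell-out that runs those exact diagnostics and prints their combined output would be:

    // diagnose_docker.go - hypothetical sketch: run the diagnostics the
    // error message above recommends and print what they return.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        for _, args := range [][]string{
            {"systemctl", "status", "docker.service"},
            {"journalctl", "-xeu", "docker.service"},
        } {
            // CombinedOutput captures stdout and stderr together.
            out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
            fmt.Printf("$ %v\n%s", args, out)
            if err != nil {
                fmt.Println("command error:", err)
            }
        }
    }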
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 08 23:06:49 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.094333857Z" level=info msg="Starting up"
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.095749501Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.097506580Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.128963677Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152469766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152558876Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152717392Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152739794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152812201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152901110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153079328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153169038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153187739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153197940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153293950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153812303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156561482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156716198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156848512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156952822Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157044531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157169744Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190389421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190521734Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190544737Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190560338Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190576740Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190838067Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191154799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191361820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191472031Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191493633Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191512135Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191527737Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191541238Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191555639Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191571341Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191603144Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191615846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191628447Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191749659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191774162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191800364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191815666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191830867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191844669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191857670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191870171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191882273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191897274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191908775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191920677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191932778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191947379Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191967081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191979383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191992484Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192114796Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192196605Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192262611Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192291214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192304416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192318917Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192331918Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193151202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193285015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193371424Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193820570Z" level=info msg="containerd successfully booted in 0.066941s"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.170474987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.203429127Z" level=info msg="Loading containers: start."
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.350665658Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.583414712Z" level=info msg="Loading containers: done."
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608611503Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608776419Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609056647Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609260067Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.713909013Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.714066029Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:06:50 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.811241096Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813084503Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813257403Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813288003Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813374004Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:07:20 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:07:21 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:07:21 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:07:21 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.861204748Z" level=info msg="Starting up"
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.863521556Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.864856161Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1097
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.891008554Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913514335Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913559535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913591835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913605435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913626835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913637435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913748735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913963436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913985636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913996836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914019636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914159537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.916995847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917210048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917295148Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917328148Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917346448Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917634649Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917741950Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917760750Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917900050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917914850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917957150Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918196151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918327452Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918413452Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918430852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918442352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918453152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918462452Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918473352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918484552Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918499152Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918509952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918520052Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918543853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918558553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918568953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918579553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918589553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918609253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918626253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918638253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918657853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918673253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918682953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918692253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918702953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918715553Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918733953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918744753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918754653Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918959554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919161355Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919325455Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919361655Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919372055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919407356Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919416356Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919735157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919968758Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920117658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920171758Z" level=info msg="containerd successfully booted in 0.029982s"
	Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.908709690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.934950284Z" level=info msg="Loading containers: start."
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.062615440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.175164242Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.282062124Z" level=info msg="Loading containers: done."
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305666909Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305777709Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.341856738Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:07:23 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.343491744Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:07:32 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.905143108Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906371813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906906114Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.907033815Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906918515Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:07:33 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:07:33 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:07:33 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.955484761Z" level=info msg="Starting up"
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.957042767Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.958462672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1462
	Apr 08 23:07:33 functional-618200 dockerd[1462]: time="2025-04-08T23:07:33.983507761Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009132353Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009242353Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009307753Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009324953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009354454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009383954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009545254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009658655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009680555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009691855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009717555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.010024356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012580665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012671765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012945166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013039867Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013070567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013104967Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013460968Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013562869Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013583269Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013598369Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013611569Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013659269Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014010570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014156471Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014247371Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014266571Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014280071Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014397172Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014425272Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014441672Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014458272Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014472772Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014498972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014515572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014537972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014555672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014570972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014585972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014601072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014615672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014629372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014643572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014658573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014679173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014709673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014738473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014783273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014916873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014942274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014955574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014969174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015051774Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015092874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015107074Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015122374Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015133174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015147174Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015158874Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015573476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015638476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015690176Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015715476Z" level=info msg="containerd successfully booted in 0.033079s"
	Apr 08 23:07:35 functional-618200 dockerd[1456]: time="2025-04-08T23:07:35.262471031Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.762713164Z" level=info msg="Loading containers: start."
	Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.897446846Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.015338367Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.153824862Z" level=info msg="Loading containers: done."
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182692065Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182937366Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:07:38 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.220981402Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.221045402Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928174323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928255628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928274329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928976471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011163114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011256119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011273420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011437330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.047888267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048098278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048281989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048657110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089143872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089470391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089714404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.090374541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331240402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331940241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332248459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332901095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587350115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587733437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587951349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.588255466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643351545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643476652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643513354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643620460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681369670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681570881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681658686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.682028307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094044455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094486867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.095561595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.097530446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394114311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394433319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394665025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.395349443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643182806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643370211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643392711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.645053352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216296816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216387017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216402117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216977424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540620784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540963288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541044989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541180590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.848480641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850292361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850566464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.851150170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.385762643Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:09:27 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574335274Z" level=info msg="shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574507675Z" level=warning msg="cleaning up after shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574520575Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.575374478Z" level=info msg="ignoring event" container=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.602965785Z" level=info msg="ignoring event" container=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.603895489Z" level=info msg="shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604175090Z" level=warning msg="cleaning up after shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604242890Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614380530Z" level=info msg="shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614605231Z" level=warning msg="cleaning up after shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614742231Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.620402053Z" level=info msg="ignoring event" container=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.620802455Z" level=info msg="shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621015255Z" level=info msg="ignoring event" container=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621947059Z" level=info msg="ignoring event" container=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.622304660Z" level=info msg="ignoring event" container=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622827062Z" level=warning msg="cleaning up after shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.623203064Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622314560Z" level=info msg="shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624293868Z" level=warning msg="cleaning up after shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624306868Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622381461Z" level=info msg="shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631193795Z" level=warning msg="cleaning up after shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631249695Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.667400535Z" level=info msg="ignoring event" container=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.669623644Z" level=info msg="shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.672188454Z" level=warning msg="cleaning up after shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.672924657Z" level=info msg="ignoring event" container=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.673767960Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681394990Z" level=info msg="ignoring event" container=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681607190Z" level=info msg="ignoring event" container=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.681903492Z" level=info msg="shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685272405Z" level=warning msg="cleaning up after shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685407505Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.671723952Z" level=info msg="shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693693137Z" level=warning msg="cleaning up after shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693789338Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697563052Z" level=info msg="shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697641053Z" level=warning msg="cleaning up after shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697654453Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.725345060Z" level=info msg="ignoring event" container=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725697262Z" level=info msg="shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725980963Z" level=warning msg="cleaning up after shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.726206964Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.734018694Z" level=info msg="ignoring event" container=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.736798905Z" level=info msg="shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737017505Z" level=warning msg="cleaning up after shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737255906Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1456]: time="2025-04-08T23:09:32.552363388Z" level=info msg="ignoring event" container=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556138103Z" level=info msg="shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556756905Z" level=warning msg="cleaning up after shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556921006Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.565876302Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.643029581Z" level=info msg="ignoring event" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.646699056Z" level=info msg="shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647140153Z" level=warning msg="cleaning up after shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647214253Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724363532Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724563130Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724658330Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724794029Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:09:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Consumed 4.925s CPU time.
	Apr 08 23:09:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:09:38 functional-618200 dockerd[3978]: time="2025-04-08T23:09:38.782261701Z" level=info msg="Starting up"
	Apr 08 23:10:38 functional-618200 dockerd[3978]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:10:38 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
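
The journal lines above are the pivot of the failure: dockerd[3978] starts up at 23:09:38, waits on /run/containerd/containerd.sock, and exits a minute later when the dial context deadline expires, so systemd records docker.service as failed. A minimal first-pass check from the host, assuming the VM is still reachable over SSH and the guest image ships these standard tools:

	# sketch: inspect the unit, then look for a containerd process and socket in the guest
	minikube ssh -p functional-618200 "sudo systemctl status docker --no-pager"
	minikube ssh -p functional-618200 "ls -l /run/containerd/containerd.sock; pgrep -a containerd"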
	
	-- /stdout --
	W0408 23:10:38.868272   12728 out.go:270] * 
	W0408 23:10:38.869805   12728 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 23:10:38.876775   12728 out.go:201] 
	
	
	==> Docker <==
	Apr 08 23:11:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:11:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Apr 08 23:11:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:11:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:11:39 functional-618200 dockerd[4495]: time="2025-04-08T23:11:39.240374985Z" level=info msg="Starting up"
	Apr 08 23:12:39 functional-618200 dockerd[4495]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:12:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:12:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:12:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:12:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:12:39Z" level=error msg="error getting RW layer size for container ID 'd4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:12:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:12:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61'"
	Apr 08 23:12:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:12:39Z" level=error msg="error getting RW layer size for container ID 'a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:12:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:12:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa'"
	Apr 08 23:12:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:12:39Z" level=error msg="error getting RW layer size for container ID 'b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:12:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:12:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc'"
	Apr 08 23:12:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:12:39Z" level=error msg="error getting RW layer size for container ID '48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:12:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:12:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID '48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245'"
	Apr 08 23:12:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:12:39Z" level=error msg="error getting RW layer size for container ID 'e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:12:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:12:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c'"
	Apr 08 23:12:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:12:39Z" level=error msg="error getting RW layer size for container ID 'bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:12:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:12:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f'"
	Apr 08 23:12:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:12:39Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:12:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:12:39Z" level=error msg="error getting RW layer size for container ID 'd1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:12:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:12:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee'"
	Apr 08 23:12:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:12:39Z" level=error msg="error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
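
Every cri-dockerd error above is the same symptom seen through a different endpoint: cri-dockerd translates CRI calls into Docker Engine API requests over /var/run/docker.sock, and with dockerd down each GET ends in "connection reset by peer". The socket can be probed directly; a sketch, assuming curl is available inside the guest image:

	# sketch: Docker's /_ping endpoint answers "OK" only when the daemon is up
	minikube ssh -p functional-618200 "sudo curl --silent --unix-socket /var/run/docker.sock http://localhost/_ping; echo"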
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-08T23:12:41Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
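
crictl fails one hop earlier than the fallback docker ps: it speaks gRPC to cri-dockerd at unix:///var/run/cri-dockerd.sock, which is up but cannot answer because its Docker backend is gone, so validation times out with DeadlineExceeded. Each hop can be tested separately; a sketch, assuming crictl is present in the guest:

	# sketch: bounded probe of the CRI endpoint (cri-dockerd), independent of kubelet
	minikube ssh -p functional-618200 "sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock --timeout 5s version"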
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
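
Port 8441 is this profile's apiserver port (it appears throughout the kubelet errors as control-plane.minikube.internal:8441), and kube-apiserver itself runs as a container, so once the runtime died the listener went with it. A sketch to confirm nothing is bound inside the guest, assuming ss from iproute2 is in the image:

	# sketch: with the runtime down there should be no LISTEN entry for 8441
	minikube ssh -p functional-618200 "sudo ss -tlnp | grep 8441 || echo 'nothing listening on 8441'"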
	
	
	==> dmesg <==
	[Apr 8 23:07] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	[  +0.094636] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.537665] systemd-fstab-generator[1057]: Ignoring "noauto" option for root device
	[  +0.198814] systemd-fstab-generator[1069]: Ignoring "noauto" option for root device
	[  +0.229826] systemd-fstab-generator[1083]: Ignoring "noauto" option for root device
	[  +2.846583] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.173620] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.175758] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +0.246052] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[  +8.663048] systemd-fstab-generator[1449]: Ignoring "noauto" option for root device
	[  +0.103326] kauditd_printk_skb: 206 callbacks suppressed
	[  +5.045655] kauditd_printk_skb: 24 callbacks suppressed
	[  +0.759487] systemd-fstab-generator[1705]: Ignoring "noauto" option for root device
	[  +6.800944] systemd-fstab-generator[1860]: Ignoring "noauto" option for root device
	[  +0.086630] kauditd_printk_skb: 40 callbacks suppressed
	[  +8.016757] systemd-fstab-generator[2285]: Ignoring "noauto" option for root device
	[  +0.140038] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.396453] systemd-fstab-generator[2387]: Ignoring "noauto" option for root device
	[  +0.210902] kauditd_printk_skb: 12 callbacks suppressed
	[Apr 8 23:08] kauditd_printk_skb: 71 callbacks suppressed
	[Apr 8 23:09] systemd-fstab-generator[3506]: Ignoring "noauto" option for root device
	[  +0.614168] systemd-fstab-generator[3549]: Ignoring "noauto" option for root device
	[  +0.260567] systemd-fstab-generator[3561]: Ignoring "noauto" option for root device
	[  +0.277633] systemd-fstab-generator[3575]: Ignoring "noauto" option for root device
	[  +5.335755] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 23:13:39 up 7 min,  0 users,  load average: 0.06, 0.14, 0.09
	Linux functional-618200 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 08 23:13:30 functional-618200 kubelet[2292]: E0408 23:13:30.150168    2292 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m3.140191683s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 08 23:13:31 functional-618200 kubelet[2292]: E0408 23:13:31.214019    2292 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/etcd-functional-618200.18347a9c57ea8b50\": dial tcp 192.168.113.37:8441: connect: connection refused" event="&Event{ObjectMeta:{etcd-functional-618200.18347a9c57ea8b50  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:etcd-functional-618200,UID:9fb511c70f1101c6e5f88375ee4557ca,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://127.0.0.1:2381/readyz\": dial tcp 127.0.0.1:2381: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-618200,},FirstTimestamp:2025-04-08 23:09:27.607700304 +0000 UTC m=+93.838881293,LastTimestamp:2025-04-08 23:09:28.607457383 +0000 UTC m=+94.838638272,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-618200,}"
	Apr 08 23:13:33 functional-618200 kubelet[2292]: I0408 23:13:33.982101    2292 status_manager.go:890] "Failed to get status for pod" podUID="2d86200df590720b9ed4835cb131ef10" pod="kube-system/kube-scheduler-functional-618200" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-618200\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:13:33 functional-618200 kubelet[2292]: I0408 23:13:33.983090    2292 status_manager.go:890] "Failed to get status for pod" podUID="9fb511c70f1101c6e5f88375ee4557ca" pod="kube-system/etcd-functional-618200" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-618200\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:13:33 functional-618200 kubelet[2292]: I0408 23:13:33.984105    2292 status_manager.go:890] "Failed to get status for pod" podUID="195f529b1fbee47263ef9fc136a700cc" pod="kube-system/kube-apiserver-functional-618200" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-618200\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:13:34 functional-618200 kubelet[2292]: E0408 23:13:34.352732    2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-618200?timeout=10s\": dial tcp 192.168.113.37:8441: connect: connection refused" interval="7s"
	Apr 08 23:13:35 functional-618200 kubelet[2292]: E0408 23:13:35.151585    2292 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m8.14158926s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 08 23:13:39 functional-618200 kubelet[2292]: E0408 23:13:39.463183    2292 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:13:39 functional-618200 kubelet[2292]: E0408 23:13:39.463231    2292 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:13:39 functional-618200 kubelet[2292]: E0408 23:13:39.463324    2292 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 08 23:13:39 functional-618200 kubelet[2292]: E0408 23:13:39.463386    2292 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:13:39 functional-618200 kubelet[2292]: E0408 23:13:39.463520    2292 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 08 23:13:39 functional-618200 kubelet[2292]: E0408 23:13:39.463550    2292 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:13:39 functional-618200 kubelet[2292]: I0408 23:13:39.463562    2292 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:13:39 functional-618200 kubelet[2292]: E0408 23:13:39.463598    2292 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 08 23:13:39 functional-618200 kubelet[2292]: E0408 23:13:39.463612    2292 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:13:39 functional-618200 kubelet[2292]: I0408 23:13:39.463724    2292 image_gc_manager.go:214] "Failed to monitor images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:13:39 functional-618200 kubelet[2292]: E0408 23:13:39.463797    2292 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 08 23:13:39 functional-618200 kubelet[2292]: E0408 23:13:39.463923    2292 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:13:39 functional-618200 kubelet[2292]: E0408 23:13:39.464045    2292 generic.go:256] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:13:39 functional-618200 kubelet[2292]: E0408 23:13:39.464080    2292 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 08 23:13:39 functional-618200 kubelet[2292]: E0408 23:13:39.464118    2292 kuberuntime_container.go:508] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:13:39 functional-618200 kubelet[2292]: E0408 23:13:39.465216    2292 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 08 23:13:39 functional-618200 kubelet[2292]: E0408 23:13:39.465296    2292 kuberuntime_container.go:508] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 08 23:13:39 functional-618200 kubelet[2292]: E0408 23:13:39.466677    2292 kubelet.go:1529] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
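
The kubelet excerpt shows the two gates that keep the node down until Docker returns: PLEG staleness ("pleg was last seen active 4m8.14158926s ago; threshold is 3m0s") and RuntimeReady=false reported via cri-dockerd. Both derive from the same dead socket, so nothing kubelet-side helps; watching the PLEG messages is a cheap way to detect recovery. A sketch, assuming journal access in the guest:

	# sketch: PLEG messages stop once relisting succeeds against a live runtime
	minikube ssh -p functional-618200 "sudo journalctl -u kubelet --no-pager | grep -i pleg | tail -n 5"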
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 23:11:38.985376   12360 logs.go:279] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:11:39.018309   12360 logs.go:279] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:11:39.054845   12360 logs.go:279] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:11:39.087714   12360 logs.go:279] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:11:39.118847   12360 logs.go:279] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:11:39.148989   12360 logs.go:279] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:12:39.237093   12360 logs.go:279] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:12:39.272948   12360 logs.go:279] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
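
The eight identical errors above come from minikube logs iterating the known control-plane container names (k8s_kube-apiserver, k8s_etcd, ...) with docker ps -a --filter=name=...; each invocation dies at the socket before the filter ever matters. Any one of them can be replayed by hand; a sketch using the same command shape the log shows:

	# sketch: replaying one of the failed listings from the host
	minikube ssh -p functional-618200 "sudo docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}'"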
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-618200 -n functional-618200
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-618200 -n functional-618200: exit status 2 (11.7585955s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-618200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (342.73s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (120.5s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-618200 get po -A
functional_test.go:713: (dbg) Non-zero exit: kubectl --context functional-618200 get po -A: exit status 1 (10.388785s)

                                                
                                                
** stderr ** 
	E0408 23:13:54.145356    8700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.113.37:8441/api?timeout=32s\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it."
	E0408 23:13:56.256666    8700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.113.37:8441/api?timeout=32s\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it."
	E0408 23:13:58.293526    8700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.113.37:8441/api?timeout=32s\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it."
	E0408 23:14:00.348558    8700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.113.37:8441/api?timeout=32s\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it."
	E0408 23:14:02.378090    8700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.113.37:8441/api?timeout=32s\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it."
	Unable to connect to the server: dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
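
Seen from the Windows host, this is the same dead listener: connectex means the TCP handshake to 192.168.113.37:8441 was actively refused, so the problem is the missing apiserver process, not TLS or kubeconfig. Two quick host-side probes, assuming kubectl is on PATH and the context exists:

	# sketch: both fail at dial while the apiserver is down, and succeed once it returns
	kubectl --context functional-618200 cluster-info
	kubectl --context functional-618200 get --raw /readyz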
functional_test.go:715: failed to get kubectl pods: args "kubectl --context functional-618200 get po -A" : exit status 1
functional_test.go:719: expected stderr to be empty but got *"E0408 23:13:54.145356    8700 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.113.37:8441/api?timeout=32s\\\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it.\"\nE0408 23:13:56.256666    8700 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.113.37:8441/api?timeout=32s\\\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it.\"\nE0408 23:13:58.293526    8700 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.113.37:8441/api?timeout=32s\\\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it.\"\nE0408 23:14:00.348558    8700 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.113.37:8441/api?timeout=32s\\\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it.\"\nE0408 23:14:02.378090    8700 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.113.37:8441/api?timeout=32s\\\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it.\"\nUnable to connect to the server: dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it.\n"*: args "kubectl --context functional-618200 get po -A"
functional_test.go:722: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-618200 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-618200 -n functional-618200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-618200 -n functional-618200: exit status 2 (11.7995006s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-618200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-618200 logs -n 25: (1m26.0862657s)
helpers_test.go:252: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                 Args                                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| ip      | addons-582000 ip                                                      | addons-582000     | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:56 UTC |
	| addons  | addons-582000 addons disable                                          | addons-582000     | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:56 UTC |
	|         | ingress-dns --alsologtostderr                                         |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| addons  | addons-582000 addons                                                  | addons-582000     | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:56 UTC |
	|         | disable csi-hostpath-driver                                           |                   |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                |                   |                   |         |                     |                     |
	| addons  | addons-582000 addons disable                                          | addons-582000     | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:57 UTC |
	|         | ingress --alsologtostderr -v=1                                        |                   |                   |         |                     |                     |
	| stop    | -p addons-582000                                                      | addons-582000     | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:57 UTC | 08 Apr 25 22:57 UTC |
	| addons  | enable dashboard -p                                                   | addons-582000     | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:57 UTC | 08 Apr 25 22:57 UTC |
	|         | addons-582000                                                         |                   |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                  | addons-582000     | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:57 UTC | 08 Apr 25 22:57 UTC |
	|         | addons-582000                                                         |                   |                   |         |                     |                     |
	| addons  | disable gvisor -p                                                     | addons-582000     | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:57 UTC | 08 Apr 25 22:57 UTC |
	|         | addons-582000                                                         |                   |                   |         |                     |                     |
	| delete  | -p addons-582000                                                      | addons-582000     | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:57 UTC | 08 Apr 25 22:58 UTC |
	| start   | -p nospam-268300 -n=1 --memory=2250 --wait=false                      | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:58 UTC | 08 Apr 25 23:01 UTC |
	|         | --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                       |                   |                   |         |                     |                     |
	| start   | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:01 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:01 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:02 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| pause   | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:02 UTC | 08 Apr 25 23:02 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:02 UTC | 08 Apr 25 23:02 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:02 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| unpause | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                               | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| delete  | -p nospam-268300                                                      | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	| start   | -p functional-618200                                                  | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:08 UTC |
	|         | --memory=4000                                                         |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                 |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                            |                   |                   |         |                     |                     |
	| start   | -p functional-618200                                                  | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:08 UTC |                     |
	|         | --alsologtostderr -v=8                                                |                   |                   |         |                     |                     |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 23:08:09
	Running on machine: minikube6
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 23:08:09.246712   12728 out.go:345] Setting OutFile to fd 812 ...
	I0408 23:08:09.325819   12728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:08:09.325819   12728 out.go:358] Setting ErrFile to fd 1352...
	I0408 23:08:09.325819   12728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:08:09.346759   12728 out.go:352] Setting JSON to false
	I0408 23:08:09.349936   12728 start.go:129] hostinfo: {"hostname":"minikube6","uptime":10687,"bootTime":1744143002,"procs":176,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0408 23:08:09.349936   12728 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 23:08:09.354680   12728 out.go:177] * [functional-618200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0408 23:08:09.360335   12728 notify.go:220] Checking for updates...
	I0408 23:08:09.363251   12728 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0408 23:08:09.365934   12728 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 23:08:09.370015   12728 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0408 23:08:09.372261   12728 out.go:177]   - MINIKUBE_LOCATION=20501
	I0408 23:08:09.376217   12728 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 23:08:09.380199   12728 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:08:09.380595   12728 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 23:08:14.781214   12728 out.go:177] * Using the hyperv driver based on existing profile
	I0408 23:08:14.787195   12728 start.go:297] selected driver: hyperv
	I0408 23:08:14.787195   12728 start.go:901] validating driver "hyperv" against &{Name:functional-618200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-618200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.37 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:08:14.788108   12728 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 23:08:14.840719   12728 cni.go:84] Creating CNI manager for ""
	I0408 23:08:14.840719   12728 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 23:08:14.840719   12728 start.go:340] cluster config:
	{Name:functional-618200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-618200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.37 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:08:14.840719   12728 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 23:08:14.844868   12728 out.go:177] * Starting "functional-618200" primary control-plane node in "functional-618200" cluster
	I0408 23:08:14.847279   12728 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0408 23:08:14.847279   12728 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0408 23:08:14.847279   12728 cache.go:56] Caching tarball of preloaded images
	I0408 23:08:14.847279   12728 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0408 23:08:14.847279   12728 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0408 23:08:14.848442   12728 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-618200\config.json ...
	I0408 23:08:14.850635   12728 start.go:360] acquireMachinesLock for functional-618200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 23:08:14.850635   12728 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-618200"
	I0408 23:08:14.851114   12728 start.go:96] Skipping create...Using existing machine configuration
	I0408 23:08:14.851183   12728 fix.go:54] fixHost starting: 
	I0408 23:08:14.851361   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:17.635558   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:17.636077   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:17.636077   12728 fix.go:112] recreateIfNeeded on functional-618200: state=Running err=<nil>
	W0408 23:08:17.636077   12728 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 23:08:17.641199   12728 out.go:177] * Updating the running hyperv "functional-618200" VM ...
	I0408 23:08:17.643270   12728 machine.go:93] provisionDockerMachine start ...
	I0408 23:08:17.643828   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:19.832353   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:19.832353   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:19.833486   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:22.348787   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:22.348787   12728 main.go:141] libmachine: [stderr =====>] : 
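A pattern worth noting in the trace above: before every SSH session the hyperv driver shells out to PowerShell twice, once for the VM state and once for the guest IP. The two queries, quoted verbatim from the log, are:

    ( Hyper-V\Get-VM functional-618200 ).state
    (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]

Each round trip costs roughly 2-2.5 s here (compare the timestamps), which accounts for most of the wall-clock time in this provisioning phase.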
	I0408 23:08:22.354331   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:22.354942   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:22.354942   12728 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 23:08:22.482052   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-618200
	
	I0408 23:08:22.482109   12728 buildroot.go:166] provisioning hostname "functional-618200"
	I0408 23:08:22.482218   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:24.614743   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:24.615199   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:24.615199   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:27.116022   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:27.116669   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:27.122660   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:27.122837   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:27.122837   12728 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-618200 && echo "functional-618200" | sudo tee /etc/hostname
	I0408 23:08:27.296048   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-618200
	
	I0408 23:08:27.296048   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:29.515938   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:29.516732   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:29.516860   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:32.104430   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:32.104430   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:32.111087   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:32.111822   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:32.111822   12728 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-618200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-618200/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-618200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 23:08:32.239307   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: 
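The empty command output above is the quiet path of the /etc/hosts guard: the snippet prints nothing unless it reaches the append branch, where tee echoes the new entry. Either way the intended end state is a single loopback alias for the hostname, roughly (illustrative, not captured from this run):

    127.0.1.1 functional-618200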
	I0408 23:08:32.239307   12728 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0408 23:08:32.239307   12728 buildroot.go:174] setting up certificates
	I0408 23:08:32.239307   12728 provision.go:84] configureAuth start
	I0408 23:08:32.239907   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:34.375660   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:34.376637   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:34.376637   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:36.940152   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:36.940811   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:36.940910   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:39.102003   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:39.102003   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:39.102003   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:41.651752   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:41.651752   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:41.651752   12728 provision.go:143] copyHostCerts
	I0408 23:08:41.652744   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0408 23:08:41.653241   12728 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0408 23:08:41.653241   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0408 23:08:41.653897   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0408 23:08:41.655530   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0408 23:08:41.655919   12728 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0408 23:08:41.655919   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0408 23:08:41.656607   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0408 23:08:41.657919   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0408 23:08:41.658240   12728 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0408 23:08:41.658370   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0408 23:08:41.658791   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0408 23:08:41.659993   12728 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-618200 san=[127.0.0.1 192.168.113.37 functional-618200 localhost minikube]
	I0408 23:08:41.724180   12728 provision.go:177] copyRemoteCerts
	I0408 23:08:41.734528   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 23:08:41.734661   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:43.857555   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:43.858453   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:43.858453   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:46.376433   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:46.376433   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:46.376862   12728 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:08:46.479933   12728 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7452489s)
	I0408 23:08:46.479933   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0408 23:08:46.480251   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0408 23:08:46.526275   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0408 23:08:46.526275   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 23:08:46.571513   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0408 23:08:46.571513   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 23:08:46.618636   12728 provision.go:87] duration metric: took 14.3791442s to configureAuth
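configureAuth re-provisions the Docker TLS material end to end: copyHostCerts refreshes ca.pem, cert.pem, and key.pem on the host side, a server certificate is minted with SANs for 127.0.0.1, 192.168.113.37, functional-618200, localhost, and minikube, and the results are pushed to /etc/docker over SSH. A quick way to confirm the SANs from inside the guest (a sketch, assuming openssl is installed there):

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'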
	I0408 23:08:46.618636   12728 buildroot.go:189] setting minikube options for container-runtime
	I0408 23:08:46.619360   12728 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:08:46.619360   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:48.759145   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:48.759997   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:48.760072   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:51.352431   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:51.352840   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:51.358422   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:51.359181   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:51.359181   12728 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0408 23:08:51.498239   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0408 23:08:51.498239   12728 buildroot.go:70] root file system type: tmpfs
	I0408 23:08:51.499500   12728 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0408 23:08:51.499565   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:53.639609   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:53.639609   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:53.639706   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:56.165286   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:56.165286   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:56.172269   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:56.172483   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:56.172483   12728 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0408 23:08:56.329047   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0408 23:08:56.329209   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:58.408221   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:58.408271   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:58.408271   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:00.972449   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:00.972449   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:00.978298   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:09:00.979066   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:09:00.979150   12728 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0408 23:09:01.120743   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 23:09:01.120743   12728 machine.go:96] duration metric: took 43.4763536s to provisionDockerMachine
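The one-liner above keeps the unit update idempotent: the desired file is written to docker.service.new, and only when diff -u reports a difference is it moved into place and followed by daemon-reload, enable, and restart. The empty output suggests the existing unit already matched, so Docker was left running untouched. To see what actually landed (a sketch; the harness itself runs the first command later in this log):

    sudo systemctl cat docker.service
    systemctl is-active docker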
	I0408 23:09:01.120743   12728 start.go:293] postStartSetup for "functional-618200" (driver="hyperv")
	I0408 23:09:01.120743   12728 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 23:09:01.134465   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 23:09:01.134586   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:03.239597   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:03.239597   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:03.240300   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:05.769173   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:05.769791   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:05.769977   12728 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:09:05.882717   12728 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7480703s)
	I0408 23:09:05.895357   12728 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 23:09:05.906701   12728 command_runner.go:130] > NAME=Buildroot
	I0408 23:09:05.906871   12728 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0408 23:09:05.906871   12728 command_runner.go:130] > ID=buildroot
	I0408 23:09:05.906871   12728 command_runner.go:130] > VERSION_ID=2023.02.9
	I0408 23:09:05.906871   12728 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0408 23:09:05.906871   12728 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 23:09:05.906871   12728 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0408 23:09:05.907746   12728 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0408 23:09:05.909230   12728 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
	I0408 23:09:05.909297   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /etc/ssl/certs/98642.pem
	I0408 23:09:05.909974   12728 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts -> hosts in /etc/test/nested/copy/9864
	I0408 23:09:05.909974   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts -> /etc/test/nested/copy/9864/hosts
	I0408 23:09:05.922022   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9864
	I0408 23:09:05.940207   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
	I0408 23:09:05.986656   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts --> /etc/test/nested/copy/9864/hosts (40 bytes)
	I0408 23:09:06.037448   12728 start.go:296] duration metric: took 4.9164478s for postStartSetup
	I0408 23:09:06.037545   12728 fix.go:56] duration metric: took 51.1857011s for fixHost
	I0408 23:09:06.037624   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:08.158094   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:08.158094   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:08.158094   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:10.681527   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:10.681527   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:10.688411   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:09:10.689102   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:09:10.689245   12728 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 23:09:10.829582   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744153750.860325411
	
	I0408 23:09:10.829582   12728 fix.go:216] guest clock: 1744153750.860325411
	I0408 23:09:10.829683   12728 fix.go:229] Guest: 2025-04-08 23:09:10.860325411 +0000 UTC Remote: 2025-04-08 23:09:06.0375451 +0000 UTC m=+56.890513901 (delta=4.822780311s)
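The delta is plain subtraction of the two wall clocks and matches the log exactly:

    delta = 23:09:10.860325411 - 23:09:06.0375451 = 4.822780311 s

Because it exceeds the drift tolerance, the guest clock is reset by the sudo date -s @1744153750 command that follows; 1744153750 is the epoch for Tue Apr 8 23:09:10 UTC 2025, as the command's echoed output confirms.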
	I0408 23:09:10.829858   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:12.957017   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:12.957017   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:12.957017   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:15.521412   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:15.521412   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:15.527916   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:09:15.528634   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:09:15.528634   12728 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744153750
	I0408 23:09:15.671072   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr  8 23:09:10 UTC 2025
	
	I0408 23:09:15.671072   12728 fix.go:236] clock set: Tue Apr  8 23:09:10 UTC 2025
	 (err=<nil>)
	I0408 23:09:15.671072   12728 start.go:83] releasing machines lock for "functional-618200", held for 1m0.8196519s
	I0408 23:09:15.671072   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:17.795924   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:17.795924   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:17.795924   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:20.343976   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:20.344152   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:20.347691   12728 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0408 23:09:20.347691   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:20.358515   12728 ssh_runner.go:195] Run: cat /version.json
	I0408 23:09:20.358515   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:22.544260   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:22.544260   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:22.544260   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:22.547450   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:22.547450   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:22.547565   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:25.306292   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:25.306292   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:25.306292   12728 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:09:25.329784   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:25.330858   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:25.330972   12728 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:09:25.407167   12728 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0408 23:09:25.407167   12728 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0594111s)
	W0408 23:09:25.407380   12728 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
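This exit-127 failure is the root of the registry warning emitted a few lines below: the runner invokes the Windows binary name curl.exe over SSH inside the Linux guest, where only curl exists, so the connectivity probe cannot succeed regardless of network health. Run by hand inside the VM, the intended check would be (a sketch):

    curl -sS -m 2 https://registry.k8s.io/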
	I0408 23:09:25.427823   12728 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0408 23:09:25.427823   12728 ssh_runner.go:235] Completed: cat /version.json: (5.0692422s)
	I0408 23:09:25.441651   12728 ssh_runner.go:195] Run: systemctl --version
	I0408 23:09:25.452009   12728 command_runner.go:130] > systemd 252 (252)
	I0408 23:09:25.452009   12728 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0408 23:09:25.462226   12728 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0408 23:09:25.470182   12728 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0408 23:09:25.470647   12728 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 23:09:25.483329   12728 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 23:09:25.504611   12728 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0408 23:09:25.504611   12728 start.go:495] detecting cgroup driver to use...
	I0408 23:09:25.505055   12728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0408 23:09:25.518103   12728 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0408 23:09:25.518165   12728 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0408 23:09:25.545691   12728 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0408 23:09:25.557677   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0408 23:09:25.585837   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 23:09:25.605727   12728 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 23:09:25.616269   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 23:09:25.648654   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:09:25.682043   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 23:09:25.712502   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:09:25.745703   12728 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 23:09:25.776089   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 23:09:25.813738   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 23:09:25.847440   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
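Taken together, the sed edits rewrite /etc/containerd/config.toml to match the cluster config: cgroupfs instead of SystemdCgroup, the io.containerd.runc.v2 shim, sandbox image registry.k8s.io/pause:3.10, conf_dir /etc/cni/net.d, and unprivileged ports enabled. A spot check from inside the guest (a sketch):

    grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml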
	I0408 23:09:25.878964   12728 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 23:09:25.897917   12728 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0408 23:09:25.910039   12728 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
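These two kernel settings are prerequisites for the bridge CNI selected earlier: net.bridge.bridge-nf-call-iptables=1 (already set, per the echoed value) makes bridged pod traffic traverse iptables, and net.ipv4.ip_forward=1 lets the node route between pods. Both can be re-checked in one call (a sketch, run inside the guest):

    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward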
	I0408 23:09:25.937635   12728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:09:26.191579   12728 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 23:09:26.223263   12728 start.go:495] detecting cgroup driver to use...
	I0408 23:09:26.235750   12728 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0408 23:09:26.260048   12728 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0408 23:09:26.260125   12728 command_runner.go:130] > [Unit]
	I0408 23:09:26.260125   12728 command_runner.go:130] > Description=Docker Application Container Engine
	I0408 23:09:26.260125   12728 command_runner.go:130] > Documentation=https://docs.docker.com
	I0408 23:09:26.260200   12728 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0408 23:09:26.260200   12728 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0408 23:09:26.260200   12728 command_runner.go:130] > StartLimitBurst=3
	I0408 23:09:26.260200   12728 command_runner.go:130] > StartLimitIntervalSec=60
	I0408 23:09:26.260200   12728 command_runner.go:130] > [Service]
	I0408 23:09:26.260200   12728 command_runner.go:130] > Type=notify
	I0408 23:09:26.260200   12728 command_runner.go:130] > Restart=on-failure
	I0408 23:09:26.260338   12728 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0408 23:09:26.260338   12728 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0408 23:09:26.260338   12728 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0408 23:09:26.260338   12728 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0408 23:09:26.260338   12728 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0408 23:09:26.260472   12728 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0408 23:09:26.260472   12728 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0408 23:09:26.260472   12728 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0408 23:09:26.260472   12728 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0408 23:09:26.260472   12728 command_runner.go:130] > ExecStart=
	I0408 23:09:26.260472   12728 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0408 23:09:26.260581   12728 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0408 23:09:26.260581   12728 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0408 23:09:26.260581   12728 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0408 23:09:26.260581   12728 command_runner.go:130] > LimitNOFILE=infinity
	I0408 23:09:26.260678   12728 command_runner.go:130] > LimitNPROC=infinity
	I0408 23:09:26.260707   12728 command_runner.go:130] > LimitCORE=infinity
	I0408 23:09:26.260707   12728 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0408 23:09:26.260707   12728 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0408 23:09:26.260764   12728 command_runner.go:130] > TasksMax=infinity
	I0408 23:09:26.260764   12728 command_runner.go:130] > TimeoutStartSec=0
	I0408 23:09:26.260764   12728 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0408 23:09:26.260764   12728 command_runner.go:130] > Delegate=yes
	I0408 23:09:26.260802   12728 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0408 23:09:26.260802   12728 command_runner.go:130] > KillMode=process
	I0408 23:09:26.260847   12728 command_runner.go:130] > [Install]
	I0408 23:09:26.260847   12728 command_runner.go:130] > WantedBy=multi-user.target
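The comments in this unit describe systemd's "clear, then redefine" rule for ExecStart=. A minimal drop-in using the same pattern (path and command are illustrative, not from this VM):

    # /etc/systemd/system/docker.service.d/override.conf (hypothetical)
    [Service]
    # An empty assignment discards the ExecStart inherited from the base unit...
    ExecStart=
    # ...so the unit ends up with exactly one start command.
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock

A drop-in edit only takes effect after `sudo systemctl daemon-reload`, which is why the runner reloads systemd before each engine restart below.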
	I0408 23:09:26.272013   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:09:26.309047   12728 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 23:09:26.364238   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:09:26.397809   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
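`systemctl is-active --quiet` prints nothing and signals purely through its exit status (0 only when the unit is active), so the runner keys the stop-and-recheck logic above off the exit code. The equivalent shell idiom:

    # Stop containerd only if it is still running; a later is-active re-checks it.
    if systemctl is-active --quiet containerd; then
        systemctl stop -f containerd
    fi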
	I0408 23:09:26.420470   12728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:09:26.452776   12728 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
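With Docker as the container runtime, crictl must be pointed at the cri-dockerd socket rather than containerd's. The tee above leaves /etc/crictl.yaml (the file crictl reads by default) as:

    runtime-endpoint: unix:///var/run/cri-dockerd.sock

Once the engine is back up, `sudo crictl ps` would then reach Docker through cri-dockerd.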
	I0408 23:09:26.465171   12728 ssh_runner.go:195] Run: which cri-dockerd
	I0408 23:09:26.471612   12728 command_runner.go:130] > /usr/bin/cri-dockerd
	I0408 23:09:26.483601   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0408 23:09:26.500243   12728 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0408 23:09:26.541951   12728 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0408 23:09:26.818543   12728 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0408 23:09:27.059393   12728 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0408 23:09:27.059393   12728 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
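The 130-byte /etc/docker/daemon.json pushed here is what pins Docker to the cgroupfs driver. The log does not print the file; a daemon.json achieving that uses the standard exec-opts key, and only this key is implied by the log (any other contents would be guesswork):

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }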
	I0408 23:09:27.105693   12728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:09:27.332438   12728 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 23:10:38.780025   12728 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0408 23:10:38.780100   12728 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0408 23:10:38.783775   12728 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4502693s)
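The restart blocks for roughly 71 seconds and exits non-zero, so dockerd never signaled readiness. The two hints printed above are the standard triage path, and the runner takes the third step itself on the next line:

    systemctl status docker.service      # unit state and the control process's exit code
    journalctl -xeu docker.service       # recent unit log with explanatory catalog entries
    journalctl --no-pager -u docker      # full unit history (what the runner dumps below)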
	I0408 23:10:38.797107   12728 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0408 23:10:38.826638   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	I0408 23:10:38.826758   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.094333857Z" level=info msg="Starting up"
	I0408 23:10:38.826758   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.095749501Z" level=info msg="containerd not running, starting managed containerd"
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.097506580Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.128963677Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152469766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152558876Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152717392Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152739794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827006   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152812201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827006   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152901110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827074   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153079328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827097   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153169038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827157   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153187739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827181   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153197940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827181   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153293950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827260   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153812303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827260   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156561482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827340   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156716198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156848512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156952822Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157044531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157169744Z" level=info msg="metadata content store policy set" policy=shared
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190389421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190521734Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190544737Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190560338Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190576740Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190838067Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191154799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191361820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191472031Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191493633Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191512135Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191527737Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191541238Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191555639Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191571341Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191603144Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827985   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191615846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.828081   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191628447Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.828188   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191749659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828234   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191774162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828234   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191800364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828234   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191815666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828308   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191830867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828308   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191844669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828356   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191857670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828356   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191870171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828426   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191882273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828426   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191897274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828489   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191908775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828524   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191920677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828613   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191932778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828649   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191947379Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0408 23:10:38.828649   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191967081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828790   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191979383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828855   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191992484Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0408 23:10:38.828855   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192114796Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0408 23:10:38.828935   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192196605Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192262611Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192291214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192304416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192318917Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192331918Z" level=info msg="NRI interface is disabled by configuration."
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193151202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193285015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193371424Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193820570Z" level=info msg="containerd successfully booted in 0.066941s"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.170474987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.203429127Z" level=info msg="Loading containers: start."
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.350665658Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.583414712Z" level=info msg="Loading containers: done."
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608611503Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608776419Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609056647Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609260067Z" level=info msg="Daemon has completed initialization"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.713909013Z" level=info msg="API listen on /var/run/docker.sock"
	I0408 23:10:38.829565   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.714066029Z" level=info msg="API listen on [::]:2376"
	I0408 23:10:38.829565   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 systemd[1]: Started Docker Application Container Engine.
	I0408 23:10:38.829625   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.811241096Z" level=info msg="Processing signal 'terminated'"
	I0408 23:10:38.829625   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813084503Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0408 23:10:38.829625   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813257403Z" level=info msg="Daemon shutdown complete"
	I0408 23:10:38.829625   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813288003Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0408 23:10:38.829753   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813374004Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0408 23:10:38.829753   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	I0408 23:10:38.829790   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	I0408 23:10:38.829942   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	I0408 23:10:38.829942   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	I0408 23:10:38.829942   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.861204748Z" level=info msg="Starting up"
	I0408 23:10:38.830042   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.863521556Z" level=info msg="containerd not running, starting managed containerd"
	I0408 23:10:38.830042   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.864856161Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1097
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.891008554Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913514335Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913559535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913591835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913605435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913626835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913637435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913748735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913963436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913985636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913996836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914019636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914159537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.916995847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917210048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917295148Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0408 23:10:38.830797   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917328148Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0408 23:10:38.830797   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917346448Z" level=info msg="metadata content store policy set" policy=shared
	I0408 23:10:38.830797   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917634649Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917741950Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917760750Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917900050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917914850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917957150Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918196151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918327452Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918413452Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918430852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918442352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918453152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918462452Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918473352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918484552Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918499152Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.831194   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918509952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.831194   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918520052Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.831194   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918543853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831194   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918558553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831300   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918568953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831300   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918579553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831300   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918589553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831377   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918609253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831377   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918626253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831422   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918638253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831442   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918657853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831442   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918673253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831442   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918682953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918692253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918702953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918715553Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918733953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918744753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918754653Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918959554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919161355Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919325455Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919361655Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919372055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919407356Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919416356Z" level=info msg="NRI interface is disabled by configuration."
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919735157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919968758Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920117658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920171758Z" level=info msg="containerd successfully booted in 0.029982s"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.908709690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.934950284Z" level=info msg="Loading containers: start."
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.062615440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.175164242Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.282062124Z" level=info msg="Loading containers: done."
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305666909Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305777709Z" level=info msg="Daemon has completed initialization"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.341856738Z" level=info msg="API listen on /var/run/docker.sock"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 systemd[1]: Started Docker Application Container Engine.
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.343491744Z" level=info msg="API listen on [::]:2376"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.905143108Z" level=info msg="Processing signal 'terminated'"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906371813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906906114Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.907033815Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906918515Z" level=info msg="Daemon shutdown complete"
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.955484761Z" level=info msg="Starting up"
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.957042767Z" level=info msg="containerd not running, starting managed containerd"
	I0408 23:10:38.832402   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.958462672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1462
	I0408 23:10:38.832402   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 dockerd[1462]: time="2025-04-08T23:07:33.983507761Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0408 23:10:38.832440   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009132353Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0408 23:10:38.832440   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009242353Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0408 23:10:38.832490   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009307753Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0408 23:10:38.832524   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009324953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832524   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009354454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832569   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009383954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009545254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009658655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009680555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009691855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009717555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832745   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.010024356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832794   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012580665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832826   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012671765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832878   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012945166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832917   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013039867Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0408 23:10:38.832917   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013070567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0408 23:10:38.832917   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013104967Z" level=info msg="metadata content store policy set" policy=shared
	I0408 23:10:38.832975   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013460968Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0408 23:10:38.832996   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013562869Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013583269Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013598369Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013611569Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013659269Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014010570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014156471Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014247371Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014266571Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014280071Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014397172Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014425272Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014441672Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014458272Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014472772Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014498972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014515572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014537972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014555672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014570972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833567   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014585972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833567   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014601072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014615672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014629372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014643572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014658573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014679173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014709673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014738473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014783273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014916873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014942274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014955574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014969174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015051774Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015092874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015107074Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015122374Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015133174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015147174Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015158874Z" level=info msg="NRI interface is disabled by configuration."
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015573476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015638476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015690176Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015715476Z" level=info msg="containerd successfully booted in 0.033079s"
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:35 functional-618200 dockerd[1456]: time="2025-04-08T23:07:35.262471031Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.762713164Z" level=info msg="Loading containers: start."
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.897446846Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.015338367Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.153824862Z" level=info msg="Loading containers: done."
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182692065Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182937366Z" level=info msg="Daemon has completed initialization"
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 systemd[1]: Started Docker Application Container Engine.
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.220981402Z" level=info msg="API listen on /var/run/docker.sock"
	I0408 23:10:38.834375   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.221045402Z" level=info msg="API listen on [::]:2376"
	I0408 23:10:38.834375   12728 command_runner.go:130] > Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928174323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834375   12728 command_runner.go:130] > Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928255628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834443   12728 command_runner.go:130] > Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928274329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834443   12728 command_runner.go:130] > Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928976471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834533   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011163114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011256119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011273420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011437330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.047888267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048098278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048281989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048657110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089143872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089470391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089714404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.090374541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331240402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331940241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332248459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332901095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587350115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587733437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587951349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.588255466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643351545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643476652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643513354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835183   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643620460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835183   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681369670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835183   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681570881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835294   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681658686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835294   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.682028307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835294   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094044455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835373   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094486867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835373   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.095561595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835373   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.097530446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835463   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394114311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835463   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394433319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835567   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394665025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835567   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.395349443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835567   12728 command_runner.go:130] > Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643182806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835567   12728 command_runner.go:130] > Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643370211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835723   12728 command_runner.go:130] > Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643392711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835723   12728 command_runner.go:130] > Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.645053352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835723   12728 command_runner.go:130] > Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216296816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835827   12728 command_runner.go:130] > Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216387017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835827   12728 command_runner.go:130] > Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216402117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835887   12728 command_runner.go:130] > Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216977424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540620784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540963288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541044989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541180590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.848480641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850292361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850566464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.851150170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.385762643Z" level=info msg="Processing signal 'terminated'"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574335274Z" level=info msg="shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574507675Z" level=warning msg="cleaning up after shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574520575Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.575374478Z" level=info msg="ignoring event" container=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.602965785Z" level=info msg="ignoring event" container=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.603895489Z" level=info msg="shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604175090Z" level=warning msg="cleaning up after shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604242890Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614380530Z" level=info msg="shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614605231Z" level=warning msg="cleaning up after shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614742231Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.620402053Z" level=info msg="ignoring event" container=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.620802455Z" level=info msg="shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621015255Z" level=info msg="ignoring event" container=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621947059Z" level=info msg="ignoring event" container=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.622304660Z" level=info msg="ignoring event" container=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622827062Z" level=warning msg="cleaning up after shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.623203064Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622314560Z" level=info msg="shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	I0408 23:10:38.836542   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624293868Z" level=warning msg="cleaning up after shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	I0408 23:10:38.836542   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624306868Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836542   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622381461Z" level=info msg="shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	I0408 23:10:38.836542   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631193795Z" level=warning msg="cleaning up after shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631249695Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.667400535Z" level=info msg="ignoring event" container=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.669623644Z" level=info msg="shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.672188454Z" level=warning msg="cleaning up after shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.672924657Z" level=info msg="ignoring event" container=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.673767960Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681394990Z" level=info msg="ignoring event" container=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681607190Z" level=info msg="ignoring event" container=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.681903492Z" level=info msg="shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685272405Z" level=warning msg="cleaning up after shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685407505Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.671723952Z" level=info msg="shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693693137Z" level=warning msg="cleaning up after shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693789338Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697563052Z" level=info msg="shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697641053Z" level=warning msg="cleaning up after shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697654453Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.725345060Z" level=info msg="ignoring event" container=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725697262Z" level=info msg="shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	I0408 23:10:38.837349   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725980963Z" level=warning msg="cleaning up after shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	I0408 23:10:38.837349   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.726206964Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.837349   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.734018694Z" level=info msg="ignoring event" container=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.837349   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.736798905Z" level=info msg="shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	I0408 23:10:38.837581   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737017505Z" level=warning msg="cleaning up after shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	I0408 23:10:38.837581   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737255906Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.837581   12728 command_runner.go:130] > Apr 08 23:09:32 functional-618200 dockerd[1456]: time="2025-04-08T23:09:32.552363388Z" level=info msg="ignoring event" container=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.837653   12728 command_runner.go:130] > Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556138103Z" level=info msg="shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	I0408 23:10:38.837653   12728 command_runner.go:130] > Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556756905Z" level=warning msg="cleaning up after shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	I0408 23:10:38.837653   12728 command_runner.go:130] > Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556921006Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.837999   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.565876302Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.643029581Z" level=info msg="ignoring event" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.646699056Z" level=info msg="shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647140153Z" level=warning msg="cleaning up after shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647214253Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724363532Z" level=info msg="Daemon shutdown complete"
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724563130Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724658330Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724794029Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Consumed 4.925s CPU time.
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 dockerd[3978]: time="2025-04-08T23:09:38.782261701Z" level=info msg="Starting up"
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:10:38 functional-618200 dockerd[3978]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:10:38 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	I0408 23:10:38.863518   12728 out.go:201] 
	W0408 23:10:38.867350   12728 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 08 23:06:49 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.094333857Z" level=info msg="Starting up"
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.095749501Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.097506580Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.128963677Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152469766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152558876Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152717392Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152739794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152812201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152901110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153079328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153169038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153187739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153197940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153293950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153812303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156561482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156716198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156848512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156952822Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157044531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157169744Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190389421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190521734Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190544737Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190560338Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190576740Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190838067Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191154799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191361820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191472031Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191493633Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191512135Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191527737Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191541238Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191555639Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191571341Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191603144Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191615846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191628447Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191749659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191774162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191800364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191815666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191830867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191844669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191857670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191870171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191882273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191897274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191908775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191920677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191932778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191947379Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191967081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191979383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191992484Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192114796Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192196605Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192262611Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192291214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192304416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192318917Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192331918Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193151202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193285015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193371424Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193820570Z" level=info msg="containerd successfully booted in 0.066941s"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.170474987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.203429127Z" level=info msg="Loading containers: start."
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.350665658Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.583414712Z" level=info msg="Loading containers: done."
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608611503Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608776419Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609056647Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609260067Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.713909013Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.714066029Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:06:50 functional-618200 systemd[1]: Started Docker Application Container Engine.
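
[::]:2376 is the TLS-protected TCP endpoint minikube exposes so the host's docker CLI can drive the VM's daemon. One way to aim a host shell at it, reusing the profile name from this run (the binary path mirrors the test invocation):

    $ out/minikube-windows-amd64.exe -p functional-618200 docker-env
    # prints the DOCKER_HOST / DOCKER_CERT_PATH settings to apply in the shell
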
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.811241096Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813084503Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813257403Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813288003Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813374004Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:07:20 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:07:21 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:07:21 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
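
The clean stop/start pairs in this window (dockerd PID 667 to 1091 here, and 1091 to 1456 below) are expected: minikube restarts Docker while rewriting its daemon configuration during provisioning, so by themselves they do not indicate a crash loop. To watch the same sequence live on a running profile (command shape assumed; minikube ssh forwards the trailing command into the VM):

    $ out/minikube-windows-amd64.exe -p functional-618200 ssh -- sudo journalctl -u docker -f
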
	Apr 08 23:07:21 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.861204748Z" level=info msg="Starting up"
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.863521556Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.864856161Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1097
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.891008554Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913514335Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913559535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913591835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913605435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913626835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913637435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913748735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913963436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913985636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913996836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914019636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914159537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.916995847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917210048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917295148Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917328148Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917346448Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917634649Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917741950Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917760750Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917900050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917914850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917957150Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918196151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918327452Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918413452Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918430852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918442352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918453152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918462452Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918473352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918484552Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918499152Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918509952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918520052Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918543853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918558553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918568953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918579553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918589553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918609253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918626253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918638253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918657853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918673253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918682953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918692253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918702953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918715553Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918733953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918744753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918754653Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918959554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919161355Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919325455Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919361655Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919372055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919407356Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919416356Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919735157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919968758Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920117658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920171758Z" level=info msg="containerd successfully booted in 0.029982s"
	Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.908709690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.934950284Z" level=info msg="Loading containers: start."
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.062615440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.175164242Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.282062124Z" level=info msg="Loading containers: done."
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305666909Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305777709Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.341856738Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:07:23 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.343491744Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:07:32 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.905143108Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906371813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906906114Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.907033815Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906918515Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:07:33 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:07:33 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:07:33 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.955484761Z" level=info msg="Starting up"
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.957042767Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.958462672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1462
	Apr 08 23:07:33 functional-618200 dockerd[1462]: time="2025-04-08T23:07:33.983507761Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009132353Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009242353Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009307753Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009324953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009354454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009383954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009545254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009658655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009680555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009691855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009717555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.010024356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012580665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012671765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012945166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013039867Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013070567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013104967Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013460968Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013562869Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013583269Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013598369Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013611569Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013659269Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014010570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014156471Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014247371Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014266571Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014280071Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014397172Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014425272Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014441672Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014458272Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014472772Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014498972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014515572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014537972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014555672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014570972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014585972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014601072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014615672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014629372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014643572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014658573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014679173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014709673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014738473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014783273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014916873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014942274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014955574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014969174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015051774Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015092874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015107074Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015122374Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015133174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015147174Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015158874Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015573476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015638476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015690176Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015715476Z" level=info msg="containerd successfully booted in 0.033079s"
	Apr 08 23:07:35 functional-618200 dockerd[1456]: time="2025-04-08T23:07:35.262471031Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.762713164Z" level=info msg="Loading containers: start."
	Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.897446846Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.015338367Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.153824862Z" level=info msg="Loading containers: done."
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182692065Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182937366Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:07:38 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.220981402Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.221045402Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928174323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928255628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928274329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928976471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011163114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011256119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011273420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011437330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.047888267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048098278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048281989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048657110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089143872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089470391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089714404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.090374541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331240402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331940241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332248459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332901095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587350115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587733437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587951349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.588255466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643351545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643476652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643513354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643620460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681369670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681570881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681658686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.682028307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094044455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094486867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.095561595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.097530446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394114311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394433319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394665025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.395349443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643182806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643370211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643392711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.645053352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216296816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216387017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216402117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216977424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540620784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540963288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541044989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541180590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.848480641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850292361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850566464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.851150170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.385762643Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:09:27 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574335274Z" level=info msg="shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574507675Z" level=warning msg="cleaning up after shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574520575Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.575374478Z" level=info msg="ignoring event" container=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.602965785Z" level=info msg="ignoring event" container=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.603895489Z" level=info msg="shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604175090Z" level=warning msg="cleaning up after shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604242890Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614380530Z" level=info msg="shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614605231Z" level=warning msg="cleaning up after shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614742231Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.620402053Z" level=info msg="ignoring event" container=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.620802455Z" level=info msg="shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621015255Z" level=info msg="ignoring event" container=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621947059Z" level=info msg="ignoring event" container=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.622304660Z" level=info msg="ignoring event" container=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622827062Z" level=warning msg="cleaning up after shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.623203064Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622314560Z" level=info msg="shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624293868Z" level=warning msg="cleaning up after shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624306868Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622381461Z" level=info msg="shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631193795Z" level=warning msg="cleaning up after shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631249695Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.667400535Z" level=info msg="ignoring event" container=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.669623644Z" level=info msg="shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.672188454Z" level=warning msg="cleaning up after shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.672924657Z" level=info msg="ignoring event" container=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.673767960Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681394990Z" level=info msg="ignoring event" container=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681607190Z" level=info msg="ignoring event" container=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.681903492Z" level=info msg="shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685272405Z" level=warning msg="cleaning up after shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685407505Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.671723952Z" level=info msg="shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693693137Z" level=warning msg="cleaning up after shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693789338Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697563052Z" level=info msg="shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697641053Z" level=warning msg="cleaning up after shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697654453Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.725345060Z" level=info msg="ignoring event" container=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725697262Z" level=info msg="shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725980963Z" level=warning msg="cleaning up after shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.726206964Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.734018694Z" level=info msg="ignoring event" container=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.736798905Z" level=info msg="shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737017505Z" level=warning msg="cleaning up after shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737255906Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1456]: time="2025-04-08T23:09:32.552363388Z" level=info msg="ignoring event" container=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556138103Z" level=info msg="shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556756905Z" level=warning msg="cleaning up after shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556921006Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.565876302Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.643029581Z" level=info msg="ignoring event" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.646699056Z" level=info msg="shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647140153Z" level=warning msg="cleaning up after shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647214253Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724363532Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724563130Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724658330Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724794029Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:09:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Consumed 4.925s CPU time.
	Apr 08 23:09:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:09:38 functional-618200 dockerd[3978]: time="2025-04-08T23:09:38.782261701Z" level=info msg="Starting up"
	Apr 08 23:10:38 functional-618200 dockerd[3978]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:10:38 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0408 23:10:38.868272   12728 out.go:270] * 
	W0408 23:10:38.869805   12728 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 23:10:38.876775   12728 out.go:201] 
	
	
	==> Docker <==
	Apr 08 23:13:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:13:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:13:39 functional-618200 dockerd[4937]: time="2025-04-08T23:13:39.647599381Z" level=info msg="Starting up"
	Apr 08 23:14:39 functional-618200 dockerd[4937]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:14:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:14:39Z" level=error msg="error getting RW layer size for container ID 'b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:14:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:14:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc'"
	Apr 08 23:14:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:14:39Z" level=error msg="error getting RW layer size for container ID 'bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:14:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:14:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f'"
	Apr 08 23:14:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:14:39Z" level=error msg="error getting RW layer size for container ID 'e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:14:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:14:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c'"
	Apr 08 23:14:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:14:39Z" level=error msg="error getting RW layer size for container ID 'd4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:14:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:14:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:14:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:14:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:14:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61'"
	Apr 08 23:14:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:14:39Z" level=error msg="error getting RW layer size for container ID 'a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:14:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:14:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa'"
	Apr 08 23:14:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:14:39Z" level=error msg="error getting RW layer size for container ID 'd1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:14:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:14:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee'"
	Apr 08 23:14:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:14:39Z" level=error msg="error getting RW layer size for container ID '48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:14:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:14:39Z" level=error msg="Set backoffDuration to : 1m0s for container ID '48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245'"
	Apr 08 23:14:39 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:14:39Z" level=error msg="error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Apr 08 23:14:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Apr 08 23:14:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:14:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-08T23:14:42Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 8 23:07] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	[  +0.094636] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.537665] systemd-fstab-generator[1057]: Ignoring "noauto" option for root device
	[  +0.198814] systemd-fstab-generator[1069]: Ignoring "noauto" option for root device
	[  +0.229826] systemd-fstab-generator[1083]: Ignoring "noauto" option for root device
	[  +2.846583] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.173620] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.175758] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +0.246052] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[  +8.663048] systemd-fstab-generator[1449]: Ignoring "noauto" option for root device
	[  +0.103326] kauditd_printk_skb: 206 callbacks suppressed
	[  +5.045655] kauditd_printk_skb: 24 callbacks suppressed
	[  +0.759487] systemd-fstab-generator[1705]: Ignoring "noauto" option for root device
	[  +6.800944] systemd-fstab-generator[1860]: Ignoring "noauto" option for root device
	[  +0.086630] kauditd_printk_skb: 40 callbacks suppressed
	[  +8.016757] systemd-fstab-generator[2285]: Ignoring "noauto" option for root device
	[  +0.140038] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.396453] systemd-fstab-generator[2387]: Ignoring "noauto" option for root device
	[  +0.210902] kauditd_printk_skb: 12 callbacks suppressed
	[Apr 8 23:08] kauditd_printk_skb: 71 callbacks suppressed
	[Apr 8 23:09] systemd-fstab-generator[3506]: Ignoring "noauto" option for root device
	[  +0.614168] systemd-fstab-generator[3549]: Ignoring "noauto" option for root device
	[  +0.260567] systemd-fstab-generator[3561]: Ignoring "noauto" option for root device
	[  +0.277633] systemd-fstab-generator[3575]: Ignoring "noauto" option for root device
	[  +5.335755] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 23:15:40 up 9 min,  0 users,  load average: 0.16, 0.12, 0.09
	Linux functional-618200 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 08 23:15:37 functional-618200 kubelet[2292]: E0408 23:15:37.434384    2292 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 192.168.113.37:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-functional-618200.18347a9c9eb7af48  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-functional-618200,UID:2d86200df590720b9ed4835cb131ef10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-618200,},FirstTimestamp:2025-04-08 23:09:28.795549512 +0000 UTC m=+95.026730501,LastTimestamp:2025-04-08 23:09:28.795549512 +0000 UTC m=+95.026730501,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-618200,}"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.021128    2292 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.021219    2292 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.021247    2292 generic.go:256] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.021286    2292 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.021338    2292 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.021625    2292 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.021657    2292 kuberuntime_container.go:508] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.021920    2292 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.021947    2292 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: I0408 23:15:40.021958    2292 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.021981    2292 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.022019    2292 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.025952    2292 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.026045    2292 kuberuntime_container.go:508] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.026220    2292 kubelet.go:1529] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.174209    2292 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: I0408 23:15:40.174526    2292 setters.go:602] "Node became not ready" node="functional-618200" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-08T23:15:40Z","lastTransitionTime":"2025-04-08T23:15:40Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 6m13.164449596s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @-\u003e/run/docker.sock: read: connection reset by peer]"}
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.178450    2292 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 6m13.168451534s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer]"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.185604    2292 kubelet_node_status.go:549] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-04-08T23:15:40Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-04-08T23:15:40Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-04-08T23:15:40Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-04-08T23:15:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-04-08T23:15:40Z\\\",\\\"message\\\":\\\"[container runtime is down, PLEG is not healthy: pleg was last seen active 6m13.164449596s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \\\\\\\"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\\\\\\\": read unix @-\\\\u003e/run/docker.sock: read: connection reset by peer]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://Unknown\\\"}}}\" for node \"functional-618200\": Patch \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-618200/status?timeout=10s\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.187655    2292 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"functional-618200\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-618200?timeout=10s\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.188639    2292 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"functional-618200\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-618200?timeout=10s\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.189529    2292 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"functional-618200\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-618200?timeout=10s\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.190966    2292 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"functional-618200\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-618200?timeout=10s\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:15:40 functional-618200 kubelet[2292]: E0408 23:15:40.191011    2292 kubelet_node_status.go:536] "Unable to update node status" err="update node status exceeds retry count"
	

-- /stdout --
** stderr ** 
	E0408 23:14:39.658241    4524 logs.go:279] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:14:39.690380    4524 logs.go:279] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:14:39.723516    4524 logs.go:279] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:14:39.757567    4524 logs.go:279] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:14:39.791056    4524 logs.go:279] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:14:39.824727    4524 logs.go:279] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:14:39.856405    4524 logs.go:279] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:14:39.886571    4524 logs.go:279] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-618200 -n functional-618200
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-618200 -n functional-618200: exit status 2 (11.8015017s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-618200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (120.50s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (11.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-618200 ssh sudo crictl images
functional_test.go:1141: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-618200 ssh sudo crictl images: exit status 1 (11.1334688s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1143: failed to get images by "out/minikube-windows-amd64.exe -p functional-618200 ssh sudo crictl images" ssh exit status 1
functional_test.go:1147: expected sha for pause:3.3 "0184c1613d929" to be in the output but got *
-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr ***
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (11.13s)

TestFunctional/serial/CacheCmd/cache/cache_reload (179.8s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-618200 ssh sudo docker rmi registry.k8s.io/pause:latest
E0408 23:23:10.452514    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:1164: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-618200 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 1 (48.0907883s)

-- stdout --
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1167: failed to manually delete image "out/minikube-windows-amd64.exe -p functional-618200 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 1
functional_test.go:1170: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-618200 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-618200 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (11.2468307s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-618200 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-windows-amd64.exe -p functional-618200 cache reload: (1m49.2752088s)
functional_test.go:1180: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-618200 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1180: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-618200 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (11.1837748s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1182: expected "out/minikube-windows-amd64.exe -p functional-618200 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 1
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (179.80s)

TestFunctional/serial/MinikubeKubectlCmd (180.47s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-618200 kubectl -- --context functional-618200 get pods
functional_test.go:733: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-618200 kubectl -- --context functional-618200 get pods: exit status 1 (10.7205277s)

** stderr ** 
	E0408 23:28:58.329030    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.113.37:8441/api?timeout=32s\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it."
	E0408 23:29:00.468156    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.113.37:8441/api?timeout=32s\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it."
	E0408 23:29:02.493475    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.113.37:8441/api?timeout=32s\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it."
	E0408 23:29:04.527236    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.113.37:8441/api?timeout=32s\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it."
	E0408 23:29:06.561904    9728 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.113.37:8441/api?timeout=32s\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it."
	Unable to connect to the server: dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:736: failed to get pods. args "out/minikube-windows-amd64.exe -p functional-618200 kubectl -- --context functional-618200 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-618200 -n functional-618200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-618200 -n functional-618200: exit status 2 (11.7424713s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-618200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-618200 logs -n 25: (2m25.8595666s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-268300 --log_dir                                     | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:02 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-268300 --log_dir                                     | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-268300 --log_dir                                     | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-268300 --log_dir                                     | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                     | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                     | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                     | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-268300                                            | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	| start   | -p functional-618200                                        | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:08 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-618200                                        | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:08 UTC |                     |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache add                                 | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:15 UTC | 08 Apr 25 23:17 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache add                                 | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:17 UTC | 08 Apr 25 23:19 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache add                                 | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:19 UTC | 08 Apr 25 23:21 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache add                                 | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:21 UTC | 08 Apr 25 23:22 UTC |
	|         | minikube-local-cache-test:functional-618200                 |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache delete                              | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC | 08 Apr 25 23:22 UTC |
	|         | minikube-local-cache-test:functional-618200                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC | 08 Apr 25 23:22 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC | 08 Apr 25 23:22 UTC |
	| ssh     | functional-618200 ssh sudo                                  | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC |                     |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-618200                                           | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC |                     |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-618200 ssh                                       | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:23 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache reload                              | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:23 UTC | 08 Apr 25 23:25 UTC |
	| ssh     | functional-618200 ssh                                       | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:25 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:25 UTC | 08 Apr 25 23:25 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:25 UTC | 08 Apr 25 23:25 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-618200 kubectl --                                | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:28 UTC |                     |
	|         | --context functional-618200                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 23:08:09
	Running on machine: minikube6
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 23:08:09.246712   12728 out.go:345] Setting OutFile to fd 812 ...
	I0408 23:08:09.325819   12728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:08:09.325819   12728 out.go:358] Setting ErrFile to fd 1352...
	I0408 23:08:09.325819   12728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:08:09.346759   12728 out.go:352] Setting JSON to false
	I0408 23:08:09.349936   12728 start.go:129] hostinfo: {"hostname":"minikube6","uptime":10687,"bootTime":1744143002,"procs":176,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0408 23:08:09.349936   12728 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 23:08:09.354680   12728 out.go:177] * [functional-618200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0408 23:08:09.360335   12728 notify.go:220] Checking for updates...
	I0408 23:08:09.363251   12728 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0408 23:08:09.365934   12728 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 23:08:09.370015   12728 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0408 23:08:09.372261   12728 out.go:177]   - MINIKUBE_LOCATION=20501
	I0408 23:08:09.376217   12728 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 23:08:09.380199   12728 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:08:09.380595   12728 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 23:08:14.781214   12728 out.go:177] * Using the hyperv driver based on existing profile
	I0408 23:08:14.787195   12728 start.go:297] selected driver: hyperv
	I0408 23:08:14.787195   12728 start.go:901] validating driver "hyperv" against &{Name:functional-618200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-618200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.37 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:08:14.788108   12728 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 23:08:14.840719   12728 cni.go:84] Creating CNI manager for ""
	I0408 23:08:14.840719   12728 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 23:08:14.840719   12728 start.go:340] cluster config:
	{Name:functional-618200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-618200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.37 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:08:14.840719   12728 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 23:08:14.844868   12728 out.go:177] * Starting "functional-618200" primary control-plane node in "functional-618200" cluster
	I0408 23:08:14.847279   12728 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0408 23:08:14.847279   12728 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0408 23:08:14.847279   12728 cache.go:56] Caching tarball of preloaded images
	I0408 23:08:14.847279   12728 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0408 23:08:14.847279   12728 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0408 23:08:14.848442   12728 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-618200\config.json ...
	I0408 23:08:14.850635   12728 start.go:360] acquireMachinesLock for functional-618200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 23:08:14.850635   12728 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-618200"
	I0408 23:08:14.851114   12728 start.go:96] Skipping create...Using existing machine configuration
	I0408 23:08:14.851183   12728 fix.go:54] fixHost starting: 
	I0408 23:08:14.851361   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:17.635558   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:17.636077   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:17.636077   12728 fix.go:112] recreateIfNeeded on functional-618200: state=Running err=<nil>
	W0408 23:08:17.636077   12728 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 23:08:17.641199   12728 out.go:177] * Updating the running hyperv "functional-618200" VM ...
	I0408 23:08:17.643270   12728 machine.go:93] provisionDockerMachine start ...
	I0408 23:08:17.643828   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:19.832353   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:19.832353   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:19.833486   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:22.348787   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:22.348787   12728 main.go:141] libmachine: [stderr =====>] : 
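Every provisioning step that follows is bracketed by the same pair of host-side PowerShell probes, quoted here from the log for reference (VM name and adapter index exactly as logged):

    powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
    powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]

Per the timestamps, each probe round trip costs roughly 2-3 seconds, which is where most of the wall clock in provisionDockerMachine goes.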
	I0408 23:08:22.354331   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:22.354942   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:22.354942   12728 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 23:08:22.482052   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-618200
	
	I0408 23:08:22.482109   12728 buildroot.go:166] provisioning hostname "functional-618200"
	I0408 23:08:22.482218   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:24.614743   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:24.615199   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:24.615199   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:27.116022   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:27.116669   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:27.122660   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:27.122837   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:27.122837   12728 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-618200 && echo "functional-618200" | sudo tee /etc/hostname
	I0408 23:08:27.296048   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-618200
	
	I0408 23:08:27.296048   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:29.515938   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:29.516732   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:29.516860   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:32.104430   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:32.104430   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:32.111087   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:32.111822   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:32.111822   12728 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-618200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-618200/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-618200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 23:08:32.239307   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: 
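The empty command output above is the success path of the /etc/hosts script: it rewrites an existing 127.0.1.1 entry or appends one so the VM can resolve its own hostname. A hedged manual check from inside the guest (illustrative, not part of the harness):

    grep '127.0.1.1' /etc/hosts    # should print: 127.0.1.1 functional-618200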
	I0408 23:08:32.239307   12728 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0408 23:08:32.239307   12728 buildroot.go:174] setting up certificates
	I0408 23:08:32.239307   12728 provision.go:84] configureAuth start
	I0408 23:08:32.239907   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:34.375660   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:34.376637   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:34.376637   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:36.940152   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:36.940811   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:36.940910   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:39.102003   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:39.102003   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:39.102003   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:41.651752   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:41.651752   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:41.651752   12728 provision.go:143] copyHostCerts
	I0408 23:08:41.652744   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0408 23:08:41.653241   12728 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0408 23:08:41.653241   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0408 23:08:41.653897   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0408 23:08:41.655530   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0408 23:08:41.655919   12728 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0408 23:08:41.655919   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0408 23:08:41.656607   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0408 23:08:41.657919   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0408 23:08:41.658240   12728 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0408 23:08:41.658370   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0408 23:08:41.658791   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0408 23:08:41.659993   12728 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-618200 san=[127.0.0.1 192.168.113.37 functional-618200 localhost minikube]
	I0408 23:08:41.724180   12728 provision.go:177] copyRemoteCerts
	I0408 23:08:41.734528   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 23:08:41.734661   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:43.857555   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:43.858453   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:43.858453   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:46.376433   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:46.376433   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:46.376862   12728 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:08:46.479933   12728 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7452489s)
	I0408 23:08:46.479933   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0408 23:08:46.480251   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0408 23:08:46.526275   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0408 23:08:46.526275   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 23:08:46.571513   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0408 23:08:46.571513   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 23:08:46.618636   12728 provision.go:87] duration metric: took 14.3791442s to configureAuth
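At this point ca.pem, server.pem and server-key.pem are all in place under /etc/docker. If TLS to the Docker daemon misbehaves later, an illustrative way to confirm the SANs generated at 23:08:41 (openssl assumed available in the guest; this check is not part of the harness):

    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
    # expected SANs per the log: 127.0.0.1 192.168.113.37 functional-618200 localhost minikube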
	I0408 23:08:46.618636   12728 buildroot.go:189] setting minikube options for container-runtime
	I0408 23:08:46.619360   12728 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:08:46.619360   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:48.759145   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:48.759997   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:48.760072   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:51.352431   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:51.352840   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:51.358422   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:51.359181   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:51.359181   12728 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0408 23:08:51.498239   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0408 23:08:51.498239   12728 buildroot.go:70] root file system type: tmpfs
	I0408 23:08:51.499500   12728 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0408 23:08:51.499565   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:53.639609   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:53.639609   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:53.639706   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:56.165286   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:56.165286   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:56.172269   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:56.172483   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:56.172483   12728 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0408 23:08:56.329047   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0408 23:08:56.329209   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:58.408221   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:58.408271   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:58.408271   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:00.972449   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:00.972449   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:00.978298   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:09:00.979066   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:09:00.979150   12728 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0408 23:09:01.120743   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: 
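The empty output means the diff-or-swap one-liner above either found the unit unchanged or replaced it and restarted docker without complaint. Given that a later restart of the same unit fails at 23:09:27 below, a hedged first check is to lint the generated unit before suspecting the daemon itself:

    systemd-analyze verify /lib/systemd/system/docker.service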
	I0408 23:09:01.120743   12728 machine.go:96] duration metric: took 43.4763536s to provisionDockerMachine
	I0408 23:09:01.120743   12728 start.go:293] postStartSetup for "functional-618200" (driver="hyperv")
	I0408 23:09:01.120743   12728 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 23:09:01.134465   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 23:09:01.134586   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:03.239597   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:03.239597   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:03.240300   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:05.769173   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:05.769791   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:05.769977   12728 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:09:05.882717   12728 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7480703s)
	I0408 23:09:05.895357   12728 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 23:09:05.906701   12728 command_runner.go:130] > NAME=Buildroot
	I0408 23:09:05.906871   12728 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0408 23:09:05.906871   12728 command_runner.go:130] > ID=buildroot
	I0408 23:09:05.906871   12728 command_runner.go:130] > VERSION_ID=2023.02.9
	I0408 23:09:05.906871   12728 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0408 23:09:05.906871   12728 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 23:09:05.906871   12728 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0408 23:09:05.907746   12728 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0408 23:09:05.909230   12728 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
	I0408 23:09:05.909297   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /etc/ssl/certs/98642.pem
	I0408 23:09:05.909974   12728 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts -> hosts in /etc/test/nested/copy/9864
	I0408 23:09:05.909974   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts -> /etc/test/nested/copy/9864/hosts
	I0408 23:09:05.922022   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9864
	I0408 23:09:05.940207   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
	I0408 23:09:05.986656   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts --> /etc/test/nested/copy/9864/hosts (40 bytes)
	I0408 23:09:06.037448   12728 start.go:296] duration metric: took 4.9164478s for postStartSetup
	I0408 23:09:06.037545   12728 fix.go:56] duration metric: took 51.1857011s for fixHost
	I0408 23:09:06.037624   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:08.158094   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:08.158094   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:08.158094   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:10.681527   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:10.681527   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:10.688411   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:09:10.689102   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:09:10.689245   12728 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 23:09:10.829582   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744153750.860325411
	
	I0408 23:09:10.829582   12728 fix.go:216] guest clock: 1744153750.860325411
	I0408 23:09:10.829683   12728 fix.go:229] Guest: 2025-04-08 23:09:10.860325411 +0000 UTC Remote: 2025-04-08 23:09:06.0375451 +0000 UTC m=+56.890513901 (delta=4.822780311s)
	I0408 23:09:10.829858   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:12.957017   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:12.957017   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:12.957017   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:15.521412   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:15.521412   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:15.527916   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:09:15.528634   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:09:15.528634   12728 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744153750
	I0408 23:09:15.671072   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr  8 23:09:10 UTC 2025
	
	I0408 23:09:15.671072   12728 fix.go:236] clock set: Tue Apr  8 23:09:10 UTC 2025
	 (err=<nil>)
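Per fix.go:229, the guest clock ran 4.822780311s ahead of the host-observed remote time, so the driver pins it with sudo date -s @1744153750. A hedged way to re-measure residual drift afterwards (the IP and the docker user are taken from the log; the one-liner itself is illustrative):

    host_now=$(date +%s)
    guest_now=$(ssh docker@192.168.113.37 date +%s)
    echo "residual drift: $((guest_now - host_now))s"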
	I0408 23:09:15.671072   12728 start.go:83] releasing machines lock for "functional-618200", held for 1m0.8196519s
	I0408 23:09:15.671072   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:17.795924   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:17.795924   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:17.795924   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:20.343976   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:20.344152   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:20.347691   12728 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0408 23:09:20.347691   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:20.358515   12728 ssh_runner.go:195] Run: cat /version.json
	I0408 23:09:20.358515   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:22.544260   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:22.544260   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:22.544260   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:22.547450   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:22.547450   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:22.547565   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:25.306292   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:25.306292   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:25.306292   12728 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:09:25.329784   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:25.330858   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:25.330972   12728 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:09:25.407167   12728 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0408 23:09:25.407167   12728 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0594111s)
	W0408 23:09:25.407380   12728 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0408 23:09:25.427823   12728 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0408 23:09:25.427823   12728 ssh_runner.go:235] Completed: cat /version.json: (5.0692422s)
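Note what actually failed above: the harness ran curl.exe, the Windows binary name, inside the Linux guest, where only curl exists, so the probe exited 127 without ever touching the network. The "Failing to connect to https://registry.k8s.io/" warning emitted at 23:09:25 therefore reflects the broken probe, not a verified connectivity failure. An illustrative manual re-check from the Windows host:

    minikube ssh -p functional-618200 -- curl -sS -m 2 https://registry.k8s.io/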
	I0408 23:09:25.441651   12728 ssh_runner.go:195] Run: systemctl --version
	I0408 23:09:25.452009   12728 command_runner.go:130] > systemd 252 (252)
	I0408 23:09:25.452009   12728 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0408 23:09:25.462226   12728 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0408 23:09:25.470182   12728 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0408 23:09:25.470647   12728 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 23:09:25.483329   12728 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 23:09:25.504611   12728 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0408 23:09:25.504611   12728 start.go:495] detecting cgroup driver to use...
	I0408 23:09:25.505055   12728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0408 23:09:25.518103   12728 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0408 23:09:25.518165   12728 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0408 23:09:25.545691   12728 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0408 23:09:25.557677   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0408 23:09:25.585837   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 23:09:25.605727   12728 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 23:09:25.616269   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 23:09:25.648654   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:09:25.682043   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 23:09:25.712502   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:09:25.745703   12728 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 23:09:25.776089   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 23:09:25.813738   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 23:09:25.847440   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0408 23:09:25.878964   12728 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 23:09:25.897917   12728 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0408 23:09:25.910039   12728 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 23:09:25.937635   12728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:09:26.191579   12728 ssh_runner.go:195] Run: sudo systemctl restart containerd
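The run of sed -i edits above rewrites /etc/containerd/config.toml for this cluster: the cgroupfs cgroup driver instead of systemd, the io.containerd.runc.v2 shim, registry.k8s.io/pause:3.10 as the sandbox image, and /etc/cni/net.d as the CNI conf dir. An illustrative spot-check of the result inside the guest:

    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml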
	I0408 23:09:26.223263   12728 start.go:495] detecting cgroup driver to use...
	I0408 23:09:26.235750   12728 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0408 23:09:26.260048   12728 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0408 23:09:26.260125   12728 command_runner.go:130] > [Unit]
	I0408 23:09:26.260125   12728 command_runner.go:130] > Description=Docker Application Container Engine
	I0408 23:09:26.260125   12728 command_runner.go:130] > Documentation=https://docs.docker.com
	I0408 23:09:26.260200   12728 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0408 23:09:26.260200   12728 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0408 23:09:26.260200   12728 command_runner.go:130] > StartLimitBurst=3
	I0408 23:09:26.260200   12728 command_runner.go:130] > StartLimitIntervalSec=60
	I0408 23:09:26.260200   12728 command_runner.go:130] > [Service]
	I0408 23:09:26.260200   12728 command_runner.go:130] > Type=notify
	I0408 23:09:26.260200   12728 command_runner.go:130] > Restart=on-failure
	I0408 23:09:26.260338   12728 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0408 23:09:26.260338   12728 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0408 23:09:26.260338   12728 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0408 23:09:26.260338   12728 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0408 23:09:26.260338   12728 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0408 23:09:26.260472   12728 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0408 23:09:26.260472   12728 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0408 23:09:26.260472   12728 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0408 23:09:26.260472   12728 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0408 23:09:26.260472   12728 command_runner.go:130] > ExecStart=
	I0408 23:09:26.260472   12728 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0408 23:09:26.260581   12728 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0408 23:09:26.260581   12728 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0408 23:09:26.260581   12728 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0408 23:09:26.260581   12728 command_runner.go:130] > LimitNOFILE=infinity
	I0408 23:09:26.260678   12728 command_runner.go:130] > LimitNPROC=infinity
	I0408 23:09:26.260707   12728 command_runner.go:130] > LimitCORE=infinity
	I0408 23:09:26.260707   12728 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0408 23:09:26.260707   12728 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0408 23:09:26.260764   12728 command_runner.go:130] > TasksMax=infinity
	I0408 23:09:26.260764   12728 command_runner.go:130] > TimeoutStartSec=0
	I0408 23:09:26.260764   12728 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0408 23:09:26.260764   12728 command_runner.go:130] > Delegate=yes
	I0408 23:09:26.260802   12728 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0408 23:09:26.260802   12728 command_runner.go:130] > KillMode=process
	I0408 23:09:26.260847   12728 command_runner.go:130] > [Install]
	I0408 23:09:26.260847   12728 command_runner.go:130] > WantedBy=multi-user.target
	I0408 23:09:26.272013   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:09:26.309047   12728 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 23:09:26.364238   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:09:26.397809   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 23:09:26.420470   12728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:09:26.452776   12728 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0408 23:09:26.465171   12728 ssh_runner.go:195] Run: which cri-dockerd
	I0408 23:09:26.471612   12728 command_runner.go:130] > /usr/bin/cri-dockerd
	I0408 23:09:26.483601   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0408 23:09:26.500243   12728 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0408 23:09:26.541951   12728 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0408 23:09:26.818543   12728 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0408 23:09:27.059393   12728 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0408 23:09:27.059393   12728 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0408 23:09:27.105693   12728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:09:27.332438   12728 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 23:10:38.780025   12728 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0408 23:10:38.780100   12728 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0408 23:10:38.783775   12728 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4502693s)
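This is the step where SoftStart actually goes wrong: sudo systemctl restart docker blocks for 1m11.45s and then reports that the control process exited with an error code. The triage commands systemd itself suggests (quoted from the error above) are exactly what the harness runs next:

    systemctl status docker.service
    journalctl -xeu docker.service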
	I0408 23:10:38.797107   12728 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0408 23:10:38.826638   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	I0408 23:10:38.826758   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.094333857Z" level=info msg="Starting up"
	I0408 23:10:38.826758   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.095749501Z" level=info msg="containerd not running, starting managed containerd"
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.097506580Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.128963677Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152469766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152558876Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152717392Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152739794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827006   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152812201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827006   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152901110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827074   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153079328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827097   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153169038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827157   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153187739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827181   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153197940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827181   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153293950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827260   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153812303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827260   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156561482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827340   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156716198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156848512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156952822Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157044531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157169744Z" level=info msg="metadata content store policy set" policy=shared
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190389421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190521734Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190544737Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190560338Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190576740Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190838067Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191154799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191361820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191472031Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191493633Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191512135Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191527737Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191541238Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191555639Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191571341Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191603144Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827985   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191615846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.828081   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191628447Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.828188   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191749659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828234   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191774162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828234   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191800364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828234   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191815666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828308   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191830867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828308   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191844669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828356   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191857670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828356   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191870171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828426   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191882273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828426   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191897274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828489   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191908775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828524   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191920677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828613   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191932778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828649   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191947379Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0408 23:10:38.828649   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191967081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828790   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191979383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828855   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191992484Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0408 23:10:38.828855   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192114796Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0408 23:10:38.828935   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192196605Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192262611Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192291214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192304416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192318917Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192331918Z" level=info msg="NRI interface is disabled by configuration."
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193151202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193285015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193371424Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193820570Z" level=info msg="containerd successfully booted in 0.066941s"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.170474987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.203429127Z" level=info msg="Loading containers: start."
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.350665658Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.583414712Z" level=info msg="Loading containers: done."
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608611503Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608776419Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609056647Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609260067Z" level=info msg="Daemon has completed initialization"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.713909013Z" level=info msg="API listen on /var/run/docker.sock"
	I0408 23:10:38.829565   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.714066029Z" level=info msg="API listen on [::]:2376"
	I0408 23:10:38.829565   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 systemd[1]: Started Docker Application Container Engine.
	I0408 23:10:38.829625   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.811241096Z" level=info msg="Processing signal 'terminated'"
	I0408 23:10:38.829625   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813084503Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0408 23:10:38.829625   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813257403Z" level=info msg="Daemon shutdown complete"
	I0408 23:10:38.829625   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813288003Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0408 23:10:38.829753   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813374004Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0408 23:10:38.829753   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	I0408 23:10:38.829790   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	I0408 23:10:38.829942   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	I0408 23:10:38.829942   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	I0408 23:10:38.829942   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.861204748Z" level=info msg="Starting up"
	I0408 23:10:38.830042   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.863521556Z" level=info msg="containerd not running, starting managed containerd"
	I0408 23:10:38.830042   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.864856161Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1097
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.891008554Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913514335Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913559535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913591835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913605435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913626835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913637435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913748735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913963436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913985636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913996836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914019636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914159537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.916995847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917210048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917295148Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0408 23:10:38.830797   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917328148Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0408 23:10:38.830797   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917346448Z" level=info msg="metadata content store policy set" policy=shared
	I0408 23:10:38.830797   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917634649Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917741950Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917760750Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917900050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917914850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917957150Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918196151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918327452Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918413452Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918430852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918442352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918453152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918462452Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918473352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918484552Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918499152Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.831194   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918509952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.831194   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918520052Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.831194   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918543853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831194   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918558553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831300   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918568953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831300   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918579553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831300   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918589553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831377   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918609253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831377   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918626253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831422   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918638253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831442   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918657853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831442   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918673253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831442   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918682953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918692253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918702953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918715553Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918733953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918744753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918754653Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918959554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919161355Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919325455Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919361655Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919372055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919407356Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919416356Z" level=info msg="NRI interface is disabled by configuration."
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919735157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919968758Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920117658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920171758Z" level=info msg="containerd successfully booted in 0.029982s"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.908709690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.934950284Z" level=info msg="Loading containers: start."
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.062615440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.175164242Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.282062124Z" level=info msg="Loading containers: done."
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305666909Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305777709Z" level=info msg="Daemon has completed initialization"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.341856738Z" level=info msg="API listen on /var/run/docker.sock"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 systemd[1]: Started Docker Application Container Engine.
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.343491744Z" level=info msg="API listen on [::]:2376"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.905143108Z" level=info msg="Processing signal 'terminated'"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906371813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906906114Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.907033815Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906918515Z" level=info msg="Daemon shutdown complete"
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.955484761Z" level=info msg="Starting up"
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.957042767Z" level=info msg="containerd not running, starting managed containerd"
	I0408 23:10:38.832402   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.958462672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1462
	I0408 23:10:38.832402   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 dockerd[1462]: time="2025-04-08T23:07:33.983507761Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0408 23:10:38.832440   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009132353Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0408 23:10:38.832440   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009242353Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0408 23:10:38.832490   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009307753Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0408 23:10:38.832524   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009324953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832524   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009354454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832569   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009383954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009545254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009658655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009680555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009691855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009717555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832745   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.010024356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832794   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012580665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832826   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012671765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832878   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012945166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832917   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013039867Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0408 23:10:38.832917   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013070567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0408 23:10:38.832917   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013104967Z" level=info msg="metadata content store policy set" policy=shared
	I0408 23:10:38.832975   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013460968Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0408 23:10:38.832996   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013562869Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013583269Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013598369Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013611569Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013659269Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014010570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014156471Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014247371Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014266571Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014280071Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014397172Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014425272Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014441672Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014458272Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014472772Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014498972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014515572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014537972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014555672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014570972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833567   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014585972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833567   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014601072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014615672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014629372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014643572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014658573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014679173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014709673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014738473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014783273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014916873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014942274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014955574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014969174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015051774Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015092874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015107074Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015122374Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015133174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015147174Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015158874Z" level=info msg="NRI interface is disabled by configuration."
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015573476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015638476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015690176Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015715476Z" level=info msg="containerd successfully booted in 0.033079s"
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:35 functional-618200 dockerd[1456]: time="2025-04-08T23:07:35.262471031Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.762713164Z" level=info msg="Loading containers: start."
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.897446846Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.015338367Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.153824862Z" level=info msg="Loading containers: done."
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182692065Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182937366Z" level=info msg="Daemon has completed initialization"
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 systemd[1]: Started Docker Application Container Engine.
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.220981402Z" level=info msg="API listen on /var/run/docker.sock"
	I0408 23:10:38.834375   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.221045402Z" level=info msg="API listen on [::]:2376"
	I0408 23:10:38.834375   12728 command_runner.go:130] > Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928174323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834375   12728 command_runner.go:130] > Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928255628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834443   12728 command_runner.go:130] > Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928274329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834443   12728 command_runner.go:130] > Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928976471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834533   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011163114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011256119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011273420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011437330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.047888267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048098278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048281989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048657110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089143872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089470391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089714404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.090374541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331240402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331940241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332248459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332901095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587350115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587733437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587951349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.588255466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643351545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643476652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643513354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835183   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643620460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835183   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681369670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835183   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681570881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835294   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681658686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835294   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.682028307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835294   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094044455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835373   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094486867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835373   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.095561595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835373   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.097530446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835463   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394114311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835463   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394433319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835567   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394665025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835567   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.395349443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835567   12728 command_runner.go:130] > Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643182806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835567   12728 command_runner.go:130] > Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643370211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835723   12728 command_runner.go:130] > Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643392711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835723   12728 command_runner.go:130] > Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.645053352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835723   12728 command_runner.go:130] > Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216296816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835827   12728 command_runner.go:130] > Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216387017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835827   12728 command_runner.go:130] > Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216402117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835887   12728 command_runner.go:130] > Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216977424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540620784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540963288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541044989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541180590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.848480641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850292361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850566464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.851150170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.385762643Z" level=info msg="Processing signal 'terminated'"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574335274Z" level=info msg="shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574507675Z" level=warning msg="cleaning up after shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574520575Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.575374478Z" level=info msg="ignoring event" container=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.602965785Z" level=info msg="ignoring event" container=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.603895489Z" level=info msg="shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604175090Z" level=warning msg="cleaning up after shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604242890Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614380530Z" level=info msg="shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614605231Z" level=warning msg="cleaning up after shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614742231Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.620402053Z" level=info msg="ignoring event" container=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.620802455Z" level=info msg="shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621015255Z" level=info msg="ignoring event" container=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621947059Z" level=info msg="ignoring event" container=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.622304660Z" level=info msg="ignoring event" container=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622827062Z" level=warning msg="cleaning up after shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.623203064Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622314560Z" level=info msg="shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	I0408 23:10:38.836542   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624293868Z" level=warning msg="cleaning up after shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	I0408 23:10:38.836542   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624306868Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836542   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622381461Z" level=info msg="shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	I0408 23:10:38.836542   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631193795Z" level=warning msg="cleaning up after shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631249695Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.667400535Z" level=info msg="ignoring event" container=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.669623644Z" level=info msg="shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.672188454Z" level=warning msg="cleaning up after shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.672924657Z" level=info msg="ignoring event" container=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.673767960Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681394990Z" level=info msg="ignoring event" container=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681607190Z" level=info msg="ignoring event" container=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.681903492Z" level=info msg="shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685272405Z" level=warning msg="cleaning up after shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685407505Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.671723952Z" level=info msg="shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693693137Z" level=warning msg="cleaning up after shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693789338Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697563052Z" level=info msg="shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697641053Z" level=warning msg="cleaning up after shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697654453Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.725345060Z" level=info msg="ignoring event" container=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725697262Z" level=info msg="shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	I0408 23:10:38.837349   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725980963Z" level=warning msg="cleaning up after shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	I0408 23:10:38.837349   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.726206964Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.837349   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.734018694Z" level=info msg="ignoring event" container=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.837349   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.736798905Z" level=info msg="shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	I0408 23:10:38.837581   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737017505Z" level=warning msg="cleaning up after shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	I0408 23:10:38.837581   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737255906Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.837581   12728 command_runner.go:130] > Apr 08 23:09:32 functional-618200 dockerd[1456]: time="2025-04-08T23:09:32.552363388Z" level=info msg="ignoring event" container=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.837653   12728 command_runner.go:130] > Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556138103Z" level=info msg="shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	I0408 23:10:38.837653   12728 command_runner.go:130] > Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556756905Z" level=warning msg="cleaning up after shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	I0408 23:10:38.837653   12728 command_runner.go:130] > Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556921006Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.837999   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.565876302Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.643029581Z" level=info msg="ignoring event" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.646699056Z" level=info msg="shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647140153Z" level=warning msg="cleaning up after shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647214253Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724363532Z" level=info msg="Daemon shutdown complete"
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724563130Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724658330Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724794029Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Consumed 4.925s CPU time.
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 dockerd[3978]: time="2025-04-08T23:09:38.782261701Z" level=info msg="Starting up"
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:10:38 functional-618200 dockerd[3978]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:10:38 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	I0408 23:10:38.863518   12728 out.go:201] 
	W0408 23:10:38.867350   12728 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 08 23:06:49 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.094333857Z" level=info msg="Starting up"
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.095749501Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.097506580Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.128963677Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152469766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152558876Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152717392Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152739794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152812201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152901110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153079328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153169038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153187739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153197940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153293950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153812303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156561482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156716198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156848512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156952822Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157044531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157169744Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190389421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190521734Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190544737Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190560338Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190576740Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190838067Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191154799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191361820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191472031Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191493633Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191512135Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191527737Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191541238Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191555639Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191571341Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191603144Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191615846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191628447Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191749659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191774162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191800364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191815666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191830867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191844669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191857670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191870171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191882273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191897274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191908775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191920677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191932778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191947379Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191967081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191979383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191992484Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192114796Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192196605Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192262611Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192291214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192304416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192318917Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192331918Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193151202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193285015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193371424Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193820570Z" level=info msg="containerd successfully booted in 0.066941s"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.170474987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.203429127Z" level=info msg="Loading containers: start."
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.350665658Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.583414712Z" level=info msg="Loading containers: done."
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608611503Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608776419Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609056647Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609260067Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.713909013Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.714066029Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:06:50 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.811241096Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813084503Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813257403Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813288003Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813374004Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:07:20 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:07:21 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:07:21 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:07:21 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.861204748Z" level=info msg="Starting up"
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.863521556Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.864856161Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1097
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.891008554Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913514335Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913559535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913591835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913605435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913626835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913637435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913748735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913963436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913985636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913996836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914019636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914159537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.916995847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917210048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917295148Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917328148Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917346448Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917634649Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917741950Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917760750Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917900050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917914850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917957150Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918196151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918327452Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918413452Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918430852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918442352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918453152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918462452Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918473352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918484552Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918499152Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918509952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918520052Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918543853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918558553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918568953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918579553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918589553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918609253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918626253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918638253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918657853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918673253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918682953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918692253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918702953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918715553Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918733953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918744753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918754653Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918959554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919161355Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919325455Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919361655Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919372055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919407356Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919416356Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919735157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919968758Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920117658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920171758Z" level=info msg="containerd successfully booted in 0.029982s"
	Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.908709690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.934950284Z" level=info msg="Loading containers: start."
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.062615440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.175164242Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.282062124Z" level=info msg="Loading containers: done."
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305666909Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305777709Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.341856738Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:07:23 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.343491744Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:07:32 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.905143108Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906371813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906906114Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.907033815Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906918515Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:07:33 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:07:33 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:07:33 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.955484761Z" level=info msg="Starting up"
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.957042767Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.958462672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1462
	Apr 08 23:07:33 functional-618200 dockerd[1462]: time="2025-04-08T23:07:33.983507761Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009132353Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009242353Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009307753Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009324953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009354454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009383954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009545254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009658655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009680555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009691855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009717555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.010024356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012580665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012671765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012945166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013039867Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013070567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013104967Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013460968Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013562869Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013583269Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013598369Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013611569Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013659269Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014010570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014156471Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014247371Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014266571Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014280071Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014397172Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014425272Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014441672Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014458272Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014472772Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014498972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014515572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014537972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014555672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014570972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014585972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014601072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014615672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014629372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014643572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014658573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014679173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014709673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014738473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014783273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014916873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014942274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014955574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014969174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015051774Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015092874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015107074Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015122374Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015133174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015147174Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015158874Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015573476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015638476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015690176Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015715476Z" level=info msg="containerd successfully booted in 0.033079s"
	Apr 08 23:07:35 functional-618200 dockerd[1456]: time="2025-04-08T23:07:35.262471031Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.762713164Z" level=info msg="Loading containers: start."
	Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.897446846Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.015338367Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.153824862Z" level=info msg="Loading containers: done."
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182692065Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182937366Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:07:38 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.220981402Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.221045402Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928174323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928255628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928274329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928976471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011163114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011256119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011273420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011437330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.047888267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048098278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048281989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048657110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089143872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089470391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089714404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.090374541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331240402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331940241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332248459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332901095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587350115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587733437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587951349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.588255466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643351545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643476652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643513354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643620460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681369670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681570881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681658686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.682028307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094044455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094486867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.095561595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.097530446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394114311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394433319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394665025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.395349443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643182806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643370211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643392711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.645053352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216296816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216387017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216402117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216977424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540620784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540963288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541044989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541180590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.848480641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850292361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850566464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.851150170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.385762643Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:09:27 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574335274Z" level=info msg="shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574507675Z" level=warning msg="cleaning up after shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574520575Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.575374478Z" level=info msg="ignoring event" container=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.602965785Z" level=info msg="ignoring event" container=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.603895489Z" level=info msg="shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604175090Z" level=warning msg="cleaning up after shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604242890Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614380530Z" level=info msg="shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614605231Z" level=warning msg="cleaning up after shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614742231Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.620402053Z" level=info msg="ignoring event" container=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.620802455Z" level=info msg="shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621015255Z" level=info msg="ignoring event" container=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621947059Z" level=info msg="ignoring event" container=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.622304660Z" level=info msg="ignoring event" container=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622827062Z" level=warning msg="cleaning up after shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.623203064Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622314560Z" level=info msg="shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624293868Z" level=warning msg="cleaning up after shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624306868Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622381461Z" level=info msg="shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631193795Z" level=warning msg="cleaning up after shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631249695Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.667400535Z" level=info msg="ignoring event" container=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.669623644Z" level=info msg="shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.672188454Z" level=warning msg="cleaning up after shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.672924657Z" level=info msg="ignoring event" container=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.673767960Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681394990Z" level=info msg="ignoring event" container=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681607190Z" level=info msg="ignoring event" container=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.681903492Z" level=info msg="shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685272405Z" level=warning msg="cleaning up after shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685407505Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.671723952Z" level=info msg="shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693693137Z" level=warning msg="cleaning up after shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693789338Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697563052Z" level=info msg="shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697641053Z" level=warning msg="cleaning up after shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697654453Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.725345060Z" level=info msg="ignoring event" container=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725697262Z" level=info msg="shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725980963Z" level=warning msg="cleaning up after shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.726206964Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.734018694Z" level=info msg="ignoring event" container=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.736798905Z" level=info msg="shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737017505Z" level=warning msg="cleaning up after shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737255906Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1456]: time="2025-04-08T23:09:32.552363388Z" level=info msg="ignoring event" container=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556138103Z" level=info msg="shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556756905Z" level=warning msg="cleaning up after shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556921006Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.565876302Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.643029581Z" level=info msg="ignoring event" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.646699056Z" level=info msg="shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647140153Z" level=warning msg="cleaning up after shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647214253Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724363532Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724563130Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724658330Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724794029Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:09:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Consumed 4.925s CPU time.
	Apr 08 23:09:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:09:38 functional-618200 dockerd[3978]: time="2025-04-08T23:09:38.782261701Z" level=info msg="Starting up"
	Apr 08 23:10:38 functional-618200 dockerd[3978]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:10:38 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0408 23:10:38.868272   12728 out.go:270] * 
	W0408 23:10:38.869805   12728 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 23:10:38.876775   12728 out.go:201] 
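The journal excerpt above tells the whole SoftStart story: dockerd (pid 1456) shut down cleanly at 23:09:37, its replacement (pid 3978) logged "Starting up" at 23:09:38, and exactly 60 seconds later it gave up dialing /run/containerd/containerd.sock with "context deadline exceeded", at which point systemd marked docker.service failed. A quick way to confirm the state of the unit and of the socket dockerd is trying to reach would be something like the sketch below (illustrative only, not part of the test run; it assumes the functional-618200 VM is still up and reachable via minikube ssh):

  # State of the docker unit and its recent journal inside the VM
  minikube ssh -p functional-618200 "sudo systemctl status docker --no-pager"
  minikube ssh -p functional-618200 "sudo journalctl -u docker --no-pager -n 40"
  # Is anything serving the containerd socket dockerd failed to dial?
  minikube ssh -p functional-618200 "sudo ls -l /run/containerd/containerd.sock"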
	
	
	==> Docker <==
	Apr 08 23:29:43 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:29:43Z" level=error msg="error getting RW layer size for container ID 'a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:29:43 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:29:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa'"
	Apr 08 23:29:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Apr 08 23:29:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:29:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:29:43 functional-618200 dockerd[8971]: time="2025-04-08T23:29:43.729262267Z" level=info msg="Starting up"
	Apr 08 23:30:43 functional-618200 dockerd[8971]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:30:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:30:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:30:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:30:43 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:30:43Z" level=error msg="error getting RW layer size for container ID 'bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:30:43 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:30:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f'"
	Apr 08 23:30:43 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:30:43Z" level=error msg="error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Apr 08 23:30:43 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:30:43Z" level=error msg="error getting RW layer size for container ID 'd4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:30:43 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:30:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61'"
	Apr 08 23:30:43 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:30:43Z" level=error msg="error getting RW layer size for container ID '48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:30:43 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:30:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID '48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245'"
	Apr 08 23:30:43 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:30:43Z" level=error msg="error getting RW layer size for container ID 'b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:30:43 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:30:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc'"
	Apr 08 23:30:43 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:30:43Z" level=error msg="error getting RW layer size for container ID 'a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:30:43 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:30:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa'"
	Apr 08 23:30:43 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:30:43Z" level=error msg="error getting RW layer size for container ID 'd1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:30:43 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:30:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee'"
	Apr 08 23:30:43 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:30:43Z" level=error msg="error getting RW layer size for container ID 'e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:30:43 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:30:43Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c'"
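"Restart counter is at 20" means systemd has been cycling docker.service roughly once a minute since the first failure at 23:10: each attempt logs "Starting up", blocks for 60 s on the containerd dial, and exits 1, and the cri-dockerd errors in between are the side effect of the engine socket resetting under it. The loop can be pulled out of the journal in one line (again a sketch, assuming SSH access to the VM):

  # Extract the restart loop: start attempts, dial failures, scheduled restarts
  minikube ssh -p functional-618200 "sudo journalctl -u docker --no-pager | grep -E 'Starting up|failed to start daemon|Scheduled restart'"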
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-08T23:30:43Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
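Both failures above are downstream of the dead engine rather than independent faults: with dockerd down, cri-dockerd cannot answer CRI calls (so crictl ps -a exits 1), and the kube-apiserver container is gone with it (so kubectl against localhost:8441 is refused). If the daemon were healthy, both endpoints would answer a version probe; a minimal check, illustrative only and assuming the VM is reachable and curl is present in the guest:

  # Probe the Docker engine API over its unix socket
  minikube ssh -p functional-618200 "curl -s --unix-socket /var/run/docker.sock http://localhost/version"
  # Probe the CRI endpoint that kubelet and crictl use
  minikube ssh -p functional-618200 "sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version"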
	
	
	==> dmesg <==
	[Apr 8 23:07] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	[  +0.094636] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.537665] systemd-fstab-generator[1057]: Ignoring "noauto" option for root device
	[  +0.198814] systemd-fstab-generator[1069]: Ignoring "noauto" option for root device
	[  +0.229826] systemd-fstab-generator[1083]: Ignoring "noauto" option for root device
	[  +2.846583] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.173620] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.175758] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +0.246052] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[  +8.663048] systemd-fstab-generator[1449]: Ignoring "noauto" option for root device
	[  +0.103326] kauditd_printk_skb: 206 callbacks suppressed
	[  +5.045655] kauditd_printk_skb: 24 callbacks suppressed
	[  +0.759487] systemd-fstab-generator[1705]: Ignoring "noauto" option for root device
	[  +6.800944] systemd-fstab-generator[1860]: Ignoring "noauto" option for root device
	[  +0.086630] kauditd_printk_skb: 40 callbacks suppressed
	[  +8.016757] systemd-fstab-generator[2285]: Ignoring "noauto" option for root device
	[  +0.140038] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.396453] systemd-fstab-generator[2387]: Ignoring "noauto" option for root device
	[  +0.210902] kauditd_printk_skb: 12 callbacks suppressed
	[Apr 8 23:08] kauditd_printk_skb: 71 callbacks suppressed
	[Apr 8 23:09] systemd-fstab-generator[3506]: Ignoring "noauto" option for root device
	[  +0.614168] systemd-fstab-generator[3549]: Ignoring "noauto" option for root device
	[  +0.260567] systemd-fstab-generator[3561]: Ignoring "noauto" option for root device
	[  +0.277633] systemd-fstab-generator[3575]: Ignoring "noauto" option for root device
	[  +5.335755] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 23:31:44 up 25 min,  0 users,  load average: 0.00, 0.02, 0.01
	Linux functional-618200 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 08 23:31:43 functional-618200 kubelet[2292]: I0408 23:31:43.969812    2292 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: E0408 23:31:43.969763    2292 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: E0408 23:31:43.969920    2292 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: E0408 23:31:43.969978    2292 generic.go:256] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: E0408 23:31:43.969868    2292 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: E0408 23:31:43.970159    2292 kuberuntime_container.go:508] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: E0408 23:31:43.970241    2292 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: E0408 23:31:43.969488    2292 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: I0408 23:31:43.970662    2292 setters.go:602] "Node became not ready" node="functional-618200" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-08T23:31:43Z","lastTransitionTime":"2025-04-08T23:31:43Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 22m16.960615768s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @-\u003e/var/run/docker.sock: read: connection reset by peer]"}
	Apr 08 23:31:43 functional-618200 kubelet[2292]: E0408 23:31:43.970699    2292 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: E0408 23:31:43.970211    2292 kubelet.go:3018] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: E0408 23:31:43.971706    2292 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: E0408 23:31:43.971793    2292 kuberuntime_container.go:508] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: E0408 23:31:43.972866    2292 kubelet.go:1529] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: E0408 23:31:43.973030    2292 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: E0408 23:31:43.973171    2292 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: E0408 23:31:43.975933    2292 kubelet_node_status.go:549] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-04-08T23:31:43Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-04-08T23:31:43Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-04-08T23:31:43Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-04-08T23:31:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-04-08T23:31:43Z\\\",\\\"message\\\":\\\"[container runtime is down, PLEG is not healthy: pleg was last seen active 22m16.960615768s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \\\\\\\"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\\\\\\\": read unix @-\\\\u003e/var/run/docker.sock: read: connection reset by peer]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://Unknown\\\"}}}\" for node \"functional-618200\": Patch \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-618200/status?timeout=10s\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: E0408 23:31:43.978376    2292 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"functional-618200\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-618200?timeout=10s\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: I0408 23:31:43.980594    2292 status_manager.go:890] "Failed to get status for pod" podUID="9fb511c70f1101c6e5f88375ee4557ca" pod="kube-system/etcd-functional-618200" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-618200\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: E0408 23:31:43.982125    2292 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"functional-618200\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-618200?timeout=10s\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: I0408 23:31:43.982031    2292 status_manager.go:890] "Failed to get status for pod" podUID="195f529b1fbee47263ef9fc136a700cc" pod="kube-system/kube-apiserver-functional-618200" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-618200\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: I0408 23:31:43.983306    2292 status_manager.go:890] "Failed to get status for pod" podUID="2d86200df590720b9ed4835cb131ef10" pod="kube-system/kube-scheduler-functional-618200" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-618200\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: E0408 23:31:43.984185    2292 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"functional-618200\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-618200?timeout=10s\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: E0408 23:31:43.985199    2292 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"functional-618200\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-618200?timeout=10s\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:31:43 functional-618200 kubelet[2292]: E0408 23:31:43.985277    2292 kubelet_node_status.go:536] "Unable to update node status" err="update node status exceeds retry count"
	

                                                
                                                
-- /stdout --
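
The kubelet excerpt above shows the failure sequence in order: dockerd stops answering on /var/run/docker.sock, PLEG goes unhealthy (last active 22m16s against a 3m0s threshold), and the kubelet flips the node's Ready condition to False before losing the apiserver entirely. Once the apiserver answers again, that condition can be read back directly; a hypothetical check, with the profile and node name taken from this report:

	out/minikube-windows-amd64.exe kubectl -p functional-618200 -- get node functional-618200 -o jsonpath="{.status.conditions[?(@.type=='Ready')].message}"
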
** stderr ** 
	E0408 23:29:43.457220   12444 logs.go:279] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:29:43.487846   12444 logs.go:279] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:29:43.520916   12444 logs.go:279] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:29:43.548847   12444 logs.go:279] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:29:43.583224   12444 logs.go:279] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:29:43.613390   12444 logs.go:279] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:29:43.645201   12444 logs.go:279] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:30:43.735061   12444 logs.go:279] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
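
Every "Failed to list containers" entry in this stderr block is minikube invoking docker ps -a with a k8s_<component> name filter inside the VM; with dockerd down they all fail identically, so the repetition carries no extra signal. A minimal diagnostic sketch, not part of the recorded run, assuming the functional-618200 VM is still reachable over SSH:

	out/minikube-windows-amd64.exe ssh -p functional-618200 -- sudo systemctl status docker
	out/minikube-windows-amd64.exe ssh -p functional-618200 -- sudo journalctl -u docker --no-pager -n 50

If dockerd is stopped or crash-looping, the journal tail normally shows the exit reason that precedes the kubelet errors above.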
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-618200 -n functional-618200
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-618200 -n functional-618200: exit status 2 (11.7156431s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-618200" apiserver is not running, skipping kubectl commands (state="Stopped")
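
The probe above requests a single field ({{.APIServer}}) from minikube status via a Go template. The same template can report several components in one call, which makes the split seen in this report (Host still Running while the APIServer is Stopped) visible at a glance; an illustrative variant, assuming the profile still exists:

	out/minikube-windows-amd64.exe status -p functional-618200 --format "{{.Host}} {{.Kubelet}} {{.APIServer}}"
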
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (180.47s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (180.7s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out\kubectl.exe --context functional-618200 get pods
functional_test.go:758: (dbg) Non-zero exit: out\kubectl.exe --context functional-618200 get pods: exit status 1 (10.6411726s)

                                                
                                                
** stderr ** 
	E0408 23:31:58.757956    9180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.113.37:8441/api?timeout=32s\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it."
	E0408 23:32:00.871891    9180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.113.37:8441/api?timeout=32s\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it."
	E0408 23:32:02.893619    9180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.113.37:8441/api?timeout=32s\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it."
	E0408 23:32:04.924683    9180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.113.37:8441/api?timeout=32s\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it."
	E0408 23:32:06.959370    9180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.113.37:8441/api?timeout=32s\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it."
	Unable to connect to the server: dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:761: failed to run kubectl directly. args "out\\kubectl.exe --context functional-618200 get pods": exit status 1
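
The "connectex: No connection could be made because the target machine actively refused it" errors are the Windows-side report that nothing is listening on 192.168.113.37:8441: the apiserver process is gone, not merely unreachable. A quick check from the same host (a sketch only; Test-NetConnection ships with Windows 10's PowerShell):

	powershell -Command "Test-NetConnection 192.168.113.37 -Port 8441"

TcpTestSucceeded: False alongside a successful ping would match the refused connections above.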
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-618200 -n functional-618200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-618200 -n functional-618200: exit status 2 (11.7517803s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-618200 logs -n 25
E0408 23:33:10.460901    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-618200 logs -n 25: (2m26.1476683s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-268300 --log_dir                                     | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:02 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-268300 --log_dir                                     | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-268300 --log_dir                                     | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-268300 --log_dir                                     | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                     | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                     | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                     | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-268300                                            | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	| start   | -p functional-618200                                        | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:08 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-618200                                        | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:08 UTC |                     |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache add                                 | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:15 UTC | 08 Apr 25 23:17 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache add                                 | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:17 UTC | 08 Apr 25 23:19 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache add                                 | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:19 UTC | 08 Apr 25 23:21 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache add                                 | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:21 UTC | 08 Apr 25 23:22 UTC |
	|         | minikube-local-cache-test:functional-618200                 |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache delete                              | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC | 08 Apr 25 23:22 UTC |
	|         | minikube-local-cache-test:functional-618200                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC | 08 Apr 25 23:22 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC | 08 Apr 25 23:22 UTC |
	| ssh     | functional-618200 ssh sudo                                  | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC |                     |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-618200                                           | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC |                     |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-618200 ssh                                       | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:23 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache reload                              | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:23 UTC | 08 Apr 25 23:25 UTC |
	| ssh     | functional-618200 ssh                                       | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:25 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:25 UTC | 08 Apr 25 23:25 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:25 UTC | 08 Apr 25 23:25 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-618200 kubectl --                                | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:28 UTC |                     |
	|         | --context functional-618200                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 23:08:09
	Running on machine: minikube6
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 23:08:09.246712   12728 out.go:345] Setting OutFile to fd 812 ...
	I0408 23:08:09.325819   12728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:08:09.325819   12728 out.go:358] Setting ErrFile to fd 1352...
	I0408 23:08:09.325819   12728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:08:09.346759   12728 out.go:352] Setting JSON to false
	I0408 23:08:09.349936   12728 start.go:129] hostinfo: {"hostname":"minikube6","uptime":10687,"bootTime":1744143002,"procs":176,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0408 23:08:09.349936   12728 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 23:08:09.354680   12728 out.go:177] * [functional-618200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0408 23:08:09.360335   12728 notify.go:220] Checking for updates...
	I0408 23:08:09.363251   12728 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0408 23:08:09.365934   12728 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 23:08:09.370015   12728 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0408 23:08:09.372261   12728 out.go:177]   - MINIKUBE_LOCATION=20501
	I0408 23:08:09.376217   12728 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 23:08:09.380199   12728 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:08:09.380595   12728 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 23:08:14.781214   12728 out.go:177] * Using the hyperv driver based on existing profile
	I0408 23:08:14.787195   12728 start.go:297] selected driver: hyperv
	I0408 23:08:14.787195   12728 start.go:901] validating driver "hyperv" against &{Name:functional-618200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-618200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.37 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:08:14.788108   12728 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 23:08:14.840719   12728 cni.go:84] Creating CNI manager for ""
	I0408 23:08:14.840719   12728 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 23:08:14.840719   12728 start.go:340] cluster config:
	{Name:functional-618200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-618200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.37 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:08:14.840719   12728 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 23:08:14.844868   12728 out.go:177] * Starting "functional-618200" primary control-plane node in "functional-618200" cluster
	I0408 23:08:14.847279   12728 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0408 23:08:14.847279   12728 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0408 23:08:14.847279   12728 cache.go:56] Caching tarball of preloaded images
	I0408 23:08:14.847279   12728 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0408 23:08:14.847279   12728 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0408 23:08:14.848442   12728 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-618200\config.json ...
	I0408 23:08:14.850635   12728 start.go:360] acquireMachinesLock for functional-618200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 23:08:14.850635   12728 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-618200"
	I0408 23:08:14.851114   12728 start.go:96] Skipping create...Using existing machine configuration
	I0408 23:08:14.851183   12728 fix.go:54] fixHost starting: 
	I0408 23:08:14.851361   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:17.635558   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:17.636077   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:17.636077   12728 fix.go:112] recreateIfNeeded on functional-618200: state=Running err=<nil>
	W0408 23:08:17.636077   12728 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 23:08:17.641199   12728 out.go:177] * Updating the running hyperv "functional-618200" VM ...
	I0408 23:08:17.643270   12728 machine.go:93] provisionDockerMachine start ...
	I0408 23:08:17.643828   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:19.832353   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:19.832353   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:19.833486   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:22.348787   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:22.348787   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:22.354331   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:22.354942   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:22.354942   12728 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 23:08:22.482052   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-618200
	
	I0408 23:08:22.482109   12728 buildroot.go:166] provisioning hostname "functional-618200"
	I0408 23:08:22.482218   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:24.614743   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:24.615199   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:24.615199   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:27.116022   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:27.116669   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:27.122660   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:27.122837   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:27.122837   12728 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-618200 && echo "functional-618200" | sudo tee /etc/hostname
	I0408 23:08:27.296048   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-618200
	
	I0408 23:08:27.296048   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:29.515938   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:29.516732   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:29.516860   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:32.104430   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:32.104430   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:32.111087   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:32.111822   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:32.111822   12728 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-618200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-618200/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-618200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 23:08:32.239307   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 23:08:32.239307   12728 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0408 23:08:32.239307   12728 buildroot.go:174] setting up certificates
	I0408 23:08:32.239307   12728 provision.go:84] configureAuth start
	I0408 23:08:32.239907   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:34.375660   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:34.376637   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:34.376637   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:36.940152   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:36.940811   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:36.940910   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:39.102003   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:39.102003   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:39.102003   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:41.651752   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:41.651752   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:41.651752   12728 provision.go:143] copyHostCerts
	I0408 23:08:41.652744   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0408 23:08:41.653241   12728 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0408 23:08:41.653241   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0408 23:08:41.653897   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0408 23:08:41.655530   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0408 23:08:41.655919   12728 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0408 23:08:41.655919   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0408 23:08:41.656607   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0408 23:08:41.657919   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0408 23:08:41.658240   12728 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0408 23:08:41.658370   12728 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0408 23:08:41.658791   12728 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0408 23:08:41.659993   12728 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-618200 san=[127.0.0.1 192.168.113.37 functional-618200 localhost minikube]
	I0408 23:08:41.724180   12728 provision.go:177] copyRemoteCerts
	I0408 23:08:41.734528   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 23:08:41.734661   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:43.857555   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:43.858453   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:43.858453   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:46.376433   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:46.376433   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:46.376862   12728 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:08:46.479933   12728 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7452489s)
	I0408 23:08:46.479933   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0408 23:08:46.480251   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0408 23:08:46.526275   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0408 23:08:46.526275   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 23:08:46.571513   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0408 23:08:46.571513   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 23:08:46.618636   12728 provision.go:87] duration metric: took 14.3791442s to configureAuth
	I0408 23:08:46.618636   12728 buildroot.go:189] setting minikube options for container-runtime
	I0408 23:08:46.619360   12728 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:08:46.619360   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:48.759145   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:48.759997   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:48.760072   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:51.352431   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:51.352840   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:51.358422   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:51.359181   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:51.359181   12728 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0408 23:08:51.498239   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0408 23:08:51.498239   12728 buildroot.go:70] root file system type: tmpfs
	I0408 23:08:51.499500   12728 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0408 23:08:51.499565   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:53.639609   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:53.639609   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:53.639706   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:08:56.165286   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:08:56.165286   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:56.172269   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:08:56.172483   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:08:56.172483   12728 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0408 23:08:56.329047   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0408 23:08:56.329209   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:08:58.408221   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:08:58.408271   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:08:58.408271   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:00.972449   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:00.972449   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:00.978298   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:09:00.979066   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:09:00.979150   12728 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0408 23:09:01.120743   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: 
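[editor's note] The diff-or-replace one-liner at 23:09:00 is minikube's idempotency guard: docker.service.new is only swapped in (and docker restarted) when it differs from the unit already on disk. Here the command returned in about 0.14s with empty output, which indicates diff found no change and the restart branch was skipped. An equivalent, more readable form of the same guard:

  # diff exits 0 when the files match, so the replace+restart branch
  # only runs when the freshly generated unit actually changed
  if ! sudo diff -q /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new >/dev/null; then
    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
    sudo systemctl daemon-reload
    sudo systemctl enable docker
    sudo systemctl restart docker
  fi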
	I0408 23:09:01.120743   12728 machine.go:96] duration metric: took 43.4763536s to provisionDockerMachine
	I0408 23:09:01.120743   12728 start.go:293] postStartSetup for "functional-618200" (driver="hyperv")
	I0408 23:09:01.120743   12728 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 23:09:01.134465   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 23:09:01.134586   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:03.239597   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:03.239597   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:03.240300   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:05.769173   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:05.769791   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:05.769977   12728 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:09:05.882717   12728 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7480703s)
	I0408 23:09:05.895357   12728 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 23:09:05.906701   12728 command_runner.go:130] > NAME=Buildroot
	I0408 23:09:05.906871   12728 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0408 23:09:05.906871   12728 command_runner.go:130] > ID=buildroot
	I0408 23:09:05.906871   12728 command_runner.go:130] > VERSION_ID=2023.02.9
	I0408 23:09:05.906871   12728 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0408 23:09:05.906871   12728 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 23:09:05.906871   12728 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0408 23:09:05.907746   12728 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0408 23:09:05.909230   12728 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
	I0408 23:09:05.909297   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /etc/ssl/certs/98642.pem
	I0408 23:09:05.909974   12728 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts -> hosts in /etc/test/nested/copy/9864
	I0408 23:09:05.909974   12728 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts -> /etc/test/nested/copy/9864/hosts
	I0408 23:09:05.922022   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9864
	I0408 23:09:05.940207   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
	I0408 23:09:05.986656   12728 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts --> /etc/test/nested/copy/9864/hosts (40 bytes)
	I0408 23:09:06.037448   12728 start.go:296] duration metric: took 4.9164478s for postStartSetup
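[editor's note] postStartSetup mirrors anything under the host-side .minikube\files tree into the guest at the same path with the prefix stripped, which is how 98642.pem lands in /etc/ssl/certs and the nested hosts file in /etc/test/nested/copy/9864. A sketch of that mapping done by hand (scp here stands in for minikube's SSH-based copier; user and IP are taken from the sshutil lines above):

  # host:  ...\.minikube\files\etc\ssl\certs\98642.pem
  # guest: /etc/ssl/certs/98642.pem   (same path, ".minikube\files" prefix stripped)
  ssh docker@192.168.113.37 'sudo mkdir -p /etc/ssl/certs'
  scp 98642.pem docker@192.168.113.37:/tmp/ \
    && ssh docker@192.168.113.37 'sudo mv /tmp/98642.pem /etc/ssl/certs/'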
	I0408 23:09:06.037545   12728 fix.go:56] duration metric: took 51.1857011s for fixHost
	I0408 23:09:06.037624   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:08.158094   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:08.158094   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:08.158094   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:10.681527   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:10.681527   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:10.688411   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:09:10.689102   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:09:10.689245   12728 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 23:09:10.829582   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744153750.860325411
	
	I0408 23:09:10.829582   12728 fix.go:216] guest clock: 1744153750.860325411
	I0408 23:09:10.829683   12728 fix.go:229] Guest: 2025-04-08 23:09:10.860325411 +0000 UTC Remote: 2025-04-08 23:09:06.0375451 +0000 UTC m=+56.890513901 (delta=4.822780311s)
	I0408 23:09:10.829858   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:12.957017   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:12.957017   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:12.957017   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:15.521412   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:15.521412   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:15.527916   12728 main.go:141] libmachine: Using SSH client type: native
	I0408 23:09:15.528634   12728 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1427d00] 0x142a840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:09:15.528634   12728 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744153750
	I0408 23:09:15.671072   12728 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr  8 23:09:10 UTC 2025
	
	I0408 23:09:15.671072   12728 fix.go:236] clock set: Tue Apr  8 23:09:10 UTC 2025
	 (err=<nil>)
	I0408 23:09:15.671072   12728 start.go:83] releasing machines lock for "functional-618200", held for 1m0.8196519s
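[editor's note] The fixHost clock-sync step above reads the guest clock with date +%s.%N, compares it to the host (a 4.8s delta in this run), and rewinds the guest with date -s @<epoch>. A standalone sketch of the same check, assuming SSH access as in this log (the 2-second threshold is illustrative):

  guest=$(ssh docker@192.168.113.37 'date +%s')
  host=$(date +%s)
  delta=$((guest - host))
  # ${delta#-} strips the sign, giving the absolute drift in seconds
  if [ "${delta#-}" -gt 2 ]; then
    ssh docker@192.168.113.37 "sudo date -s @${host}"
  fi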
	I0408 23:09:15.671072   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:17.795924   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:17.795924   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:17.795924   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:20.343976   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:20.344152   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:20.347691   12728 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0408 23:09:20.347691   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:20.358515   12728 ssh_runner.go:195] Run: cat /version.json
	I0408 23:09:20.358515   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:09:22.544260   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:22.544260   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:22.544260   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:22.547450   12728 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:09:22.547450   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:22.547565   12728 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:09:25.306292   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:25.306292   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:25.306292   12728 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:09:25.329784   12728 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:09:25.330858   12728 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:09:25.330972   12728 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:09:25.407167   12728 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0408 23:09:25.407167   12728 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0594111s)
	W0408 23:09:25.407380   12728 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
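[editor's note] This exit-127 failure appears to be the source of the "Failing to connect to https://registry.k8s.io/" warning emitted at 23:09:25 below (and of the unexpected-stderr failure in TestErrorSpam/setup): the runner invokes curl.exe, the Windows binary name, over SSH inside the Linux guest, where the tool is plain curl, so the probe never actually tests connectivity. A name-portable form of the probe (a sketch, not minikube's code):

  # fall back to whichever curl variant exists on the target
  ssh docker@192.168.113.37 \
    'c=$(command -v curl.exe || command -v curl) && "$c" -sS -m 2 https://registry.k8s.io/'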
	I0408 23:09:25.427823   12728 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0408 23:09:25.427823   12728 ssh_runner.go:235] Completed: cat /version.json: (5.0692422s)
	I0408 23:09:25.441651   12728 ssh_runner.go:195] Run: systemctl --version
	I0408 23:09:25.452009   12728 command_runner.go:130] > systemd 252 (252)
	I0408 23:09:25.452009   12728 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0408 23:09:25.462226   12728 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0408 23:09:25.470182   12728 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0408 23:09:25.470647   12728 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 23:09:25.483329   12728 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 23:09:25.504611   12728 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
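[editor's note] Stray bridge/podman CNI configs are disabled by renaming rather than deleting: matching files gain a .mk_disabled suffix so the change is reversible. Here the find matched nothing. The logged command with the shell quoting it omits restored (the -printf diagnostic is dropped for brevity):

  sudo find /etc/cni/net.d -maxdepth 1 -type f \
    \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
    -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;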
	I0408 23:09:25.504611   12728 start.go:495] detecting cgroup driver to use...
	I0408 23:09:25.505055   12728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0408 23:09:25.518103   12728 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0408 23:09:25.518165   12728 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0408 23:09:25.545691   12728 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
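[editor's note] /etc/crictl.yaml points CRI tooling at a runtime socket, and it is written twice in this run: first for containerd (above), then re-pointed at cri-dockerd at 23:09:26 once docker is chosen as the runtime. The second write, spelled out:

  # first write (containerd):   runtime-endpoint: unix:///run/containerd/containerd.sock
  # second write (cri-dockerd): see 23:09:26 below
  printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' | sudo tee /etc/crictl.yaml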
	I0408 23:09:25.557677   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0408 23:09:25.585837   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 23:09:25.605727   12728 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 23:09:25.616269   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 23:09:25.648654   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:09:25.682043   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 23:09:25.712502   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:09:25.745703   12728 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 23:09:25.776089   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 23:09:25.813738   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 23:09:25.847440   12728 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0408 23:09:25.878964   12728 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 23:09:25.897917   12728 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0408 23:09:25.910039   12728 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 23:09:25.937635   12728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:09:26.191579   12728 ssh_runner.go:195] Run: sudo systemctl restart containerd
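[editor's note] The run of sed edits above rewrites /etc/containerd/config.toml in place; the resulting file is never dumped in this log, but read together the expressions converge on roughly these settings, which can be verified after the restart:

  # values the seds above leave behind (reconstructed from the sed expressions):
  #   sandbox_image = "registry.k8s.io/pause:3.10"
  #   restrict_oom_score_adj = false
  #   SystemdCgroup = false        (cgroupfs driver, per containerd.go:146)
  #   conf_dir = "/etc/cni/net.d"
  grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml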
	I0408 23:09:26.223263   12728 start.go:495] detecting cgroup driver to use...
	I0408 23:09:26.235750   12728 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0408 23:09:26.260048   12728 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0408 23:09:26.260125   12728 command_runner.go:130] > [Unit]
	I0408 23:09:26.260125   12728 command_runner.go:130] > Description=Docker Application Container Engine
	I0408 23:09:26.260125   12728 command_runner.go:130] > Documentation=https://docs.docker.com
	I0408 23:09:26.260200   12728 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0408 23:09:26.260200   12728 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0408 23:09:26.260200   12728 command_runner.go:130] > StartLimitBurst=3
	I0408 23:09:26.260200   12728 command_runner.go:130] > StartLimitIntervalSec=60
	I0408 23:09:26.260200   12728 command_runner.go:130] > [Service]
	I0408 23:09:26.260200   12728 command_runner.go:130] > Type=notify
	I0408 23:09:26.260200   12728 command_runner.go:130] > Restart=on-failure
	I0408 23:09:26.260338   12728 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0408 23:09:26.260338   12728 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0408 23:09:26.260338   12728 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0408 23:09:26.260338   12728 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0408 23:09:26.260338   12728 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0408 23:09:26.260472   12728 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0408 23:09:26.260472   12728 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0408 23:09:26.260472   12728 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0408 23:09:26.260472   12728 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0408 23:09:26.260472   12728 command_runner.go:130] > ExecStart=
	I0408 23:09:26.260472   12728 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0408 23:09:26.260581   12728 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0408 23:09:26.260581   12728 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0408 23:09:26.260581   12728 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0408 23:09:26.260581   12728 command_runner.go:130] > LimitNOFILE=infinity
	I0408 23:09:26.260678   12728 command_runner.go:130] > LimitNPROC=infinity
	I0408 23:09:26.260707   12728 command_runner.go:130] > LimitCORE=infinity
	I0408 23:09:26.260707   12728 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0408 23:09:26.260707   12728 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0408 23:09:26.260764   12728 command_runner.go:130] > TasksMax=infinity
	I0408 23:09:26.260764   12728 command_runner.go:130] > TimeoutStartSec=0
	I0408 23:09:26.260764   12728 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0408 23:09:26.260764   12728 command_runner.go:130] > Delegate=yes
	I0408 23:09:26.260802   12728 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0408 23:09:26.260802   12728 command_runner.go:130] > KillMode=process
	I0408 23:09:26.260847   12728 command_runner.go:130] > [Install]
	I0408 23:09:26.260847   12728 command_runner.go:130] > WantedBy=multi-user.target
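[editor's note] The systemctl cat dump confirms the unit minikube installed earlier is the one systemd actually sees. Note that its ExecStart carries no native.cgroupdriver exec-opt, so the cgroup driver is set separately via /etc/docker/daemon.json (written at 23:09:27 below). A quick check for that flag:

  # empty output here means the cgroup driver must come from daemon.json instead
  systemctl cat docker.service | grep -o 'native.cgroupdriver=[a-z]*' || true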
	I0408 23:09:26.272013   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:09:26.309047   12728 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 23:09:26.364238   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:09:26.397809   12728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 23:09:26.420470   12728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:09:26.452776   12728 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0408 23:09:26.465171   12728 ssh_runner.go:195] Run: which cri-dockerd
	I0408 23:09:26.471612   12728 command_runner.go:130] > /usr/bin/cri-dockerd
	I0408 23:09:26.483601   12728 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0408 23:09:26.500243   12728 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
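[editor's note] "scp memory -->" means the 190-byte drop-in is streamed from an in-memory asset rather than copied from a file on the Jenkins host; its contents are not echoed in this log. The streaming pattern itself, by hand ($dropin_content is a placeholder for the unechoed payload):

  printf '%s\n' "$dropin_content" \
    | ssh docker@192.168.113.37 'sudo tee /etc/systemd/system/cri-docker.service.d/10-cni.conf >/dev/null'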
	I0408 23:09:26.541951   12728 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0408 23:09:26.818543   12728 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0408 23:09:27.059393   12728 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0408 23:09:27.059393   12728 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
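[editor's note] docker.go:574 says docker is being configured to use cgroupfs, and this 130-byte daemon.json is the mechanism; the file itself is not printed. A plausible minimal form consistent with that log line (the field and value here are an assumption, not dumped in this log):

  sudo tee /etc/docker/daemon.json <<'EOF'
  {
    "exec-opts": ["native.cgroupdriver=cgroupfs"]
  }
  EOF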
	I0408 23:09:27.105693   12728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:09:27.332438   12728 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 23:10:38.780025   12728 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0408 23:10:38.780100   12728 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0408 23:10:38.783775   12728 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.4502693s)
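[editor's note] This is the failure that sinks TestFunctional/serial/SoftStart: after daemon-reload, systemctl restart docker spins for 1m11s and exits with "control process exited with error code", and minikube's next step, the journalctl dump below, is exactly the triage the error message suggests. To reproduce the diagnosis by hand inside the guest:

  systemctl status docker.service
  journalctl --no-pager -xeu docker.service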
	I0408 23:10:38.797107   12728 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0408 23:10:38.826638   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	I0408 23:10:38.826758   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.094333857Z" level=info msg="Starting up"
	I0408 23:10:38.826758   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.095749501Z" level=info msg="containerd not running, starting managed containerd"
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.097506580Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.128963677Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152469766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152558876Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152717392Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0408 23:10:38.826815   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152739794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827006   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152812201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827006   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152901110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827074   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153079328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827097   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153169038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827157   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153187739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827181   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153197940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827181   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153293950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827260   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153812303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827260   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156561482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827340   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156716198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156848512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156952822Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157044531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157169744Z" level=info msg="metadata content store policy set" policy=shared
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190389421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190521734Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190544737Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190560338Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190576740Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190838067Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191154799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191361820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191472031Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191493633Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191512135Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191527737Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191541238Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191555639Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191571341Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827458   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191603144Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.827985   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191615846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.828081   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191628447Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.828188   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191749659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828234   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191774162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828234   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191800364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828234   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191815666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828308   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191830867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828308   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191844669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828356   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191857670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828356   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191870171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828426   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191882273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828426   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191897274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828489   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191908775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828524   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191920677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828613   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191932778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828649   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191947379Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0408 23:10:38.828649   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191967081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828790   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191979383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828855   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191992484Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0408 23:10:38.828855   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192114796Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0408 23:10:38.828935   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192196605Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192262611Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192291214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192304416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192318917Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192331918Z" level=info msg="NRI interface is disabled by configuration."
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193151202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193285015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193371424Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193820570Z" level=info msg="containerd successfully booted in 0.066941s"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.170474987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.203429127Z" level=info msg="Loading containers: start."
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.350665658Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.583414712Z" level=info msg="Loading containers: done."
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608611503Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608776419Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609056647Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609260067Z" level=info msg="Daemon has completed initialization"
	I0408 23:10:38.828963   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.713909013Z" level=info msg="API listen on /var/run/docker.sock"
	I0408 23:10:38.829565   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.714066029Z" level=info msg="API listen on [::]:2376"
	I0408 23:10:38.829565   12728 command_runner.go:130] > Apr 08 23:06:50 functional-618200 systemd[1]: Started Docker Application Container Engine.
	I0408 23:10:38.829625   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.811241096Z" level=info msg="Processing signal 'terminated'"
	I0408 23:10:38.829625   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813084503Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0408 23:10:38.829625   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813257403Z" level=info msg="Daemon shutdown complete"
	I0408 23:10:38.829625   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813288003Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0408 23:10:38.829753   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813374004Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0408 23:10:38.829753   12728 command_runner.go:130] > Apr 08 23:07:20 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	I0408 23:10:38.829790   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	I0408 23:10:38.829942   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	I0408 23:10:38.829942   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	I0408 23:10:38.829942   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.861204748Z" level=info msg="Starting up"
	I0408 23:10:38.830042   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.863521556Z" level=info msg="containerd not running, starting managed containerd"
	I0408 23:10:38.830042   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.864856161Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1097
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.891008554Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913514335Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913559535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913591835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913605435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913626835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913637435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913748735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913963436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913985636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913996836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914019636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914159537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.916995847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917210048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.830207   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917295148Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0408 23:10:38.830797   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917328148Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0408 23:10:38.830797   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917346448Z" level=info msg="metadata content store policy set" policy=shared
	I0408 23:10:38.830797   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917634649Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917741950Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917760750Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917900050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917914850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0408 23:10:38.830869   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917957150Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918196151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918327452Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918413452Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918430852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918442352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918453152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918462452Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918473352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918484552Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.830965   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918499152Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.831194   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918509952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.831194   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918520052Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.831194   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918543853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831194   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918558553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831300   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918568953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831300   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918579553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831300   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918589553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831377   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918609253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831377   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918626253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831422   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918638253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831442   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918657853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831442   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918673253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831442   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918682953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918692253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918702953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918715553Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918733953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918744753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918754653Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918959554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919161355Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919325455Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919361655Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919372055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919407356Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919416356Z" level=info msg="NRI interface is disabled by configuration."
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919735157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919968758Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920117658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920171758Z" level=info msg="containerd successfully booted in 0.029982s"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.908709690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.934950284Z" level=info msg="Loading containers: start."
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.062615440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.175164242Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.282062124Z" level=info msg="Loading containers: done."
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305666909Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305777709Z" level=info msg="Daemon has completed initialization"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.341856738Z" level=info msg="API listen on /var/run/docker.sock"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 systemd[1]: Started Docker Application Container Engine.
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.343491744Z" level=info msg="API listen on [::]:2376"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.905143108Z" level=info msg="Processing signal 'terminated'"
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906371813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906906114Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.907033815Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0408 23:10:38.831493   12728 command_runner.go:130] > Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906918515Z" level=info msg="Daemon shutdown complete"
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.955484761Z" level=info msg="Starting up"
	I0408 23:10:38.832201   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.957042767Z" level=info msg="containerd not running, starting managed containerd"
	I0408 23:10:38.832402   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.958462672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1462
	I0408 23:10:38.832402   12728 command_runner.go:130] > Apr 08 23:07:33 functional-618200 dockerd[1462]: time="2025-04-08T23:07:33.983507761Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0408 23:10:38.832440   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009132353Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0408 23:10:38.832440   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009242353Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0408 23:10:38.832490   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009307753Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0408 23:10:38.832524   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009324953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832524   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009354454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832569   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009383954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009545254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009658655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009680555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009691855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832619   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009717555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832745   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.010024356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832794   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012580665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832826   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012671765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0408 23:10:38.832878   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012945166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0408 23:10:38.832917   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013039867Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0408 23:10:38.832917   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013070567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0408 23:10:38.832917   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013104967Z" level=info msg="metadata content store policy set" policy=shared
	I0408 23:10:38.832975   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013460968Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0408 23:10:38.832996   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013562869Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013583269Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013598369Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013611569Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013659269Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014010570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014156471Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014247371Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014266571Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014280071Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014397172Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014425272Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014441672Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014458272Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014472772Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014498972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014515572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014537972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014555672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833039   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014570972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833567   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014585972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833567   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014601072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014615672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014629372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014643572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014658573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014679173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014709673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014738473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014783273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014916873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014942274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014955574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014969174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015051774Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015092874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015107074Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015122374Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015133174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015147174Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015158874Z" level=info msg="NRI interface is disabled by configuration."
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015573476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015638476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015690176Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015715476Z" level=info msg="containerd successfully booted in 0.033079s"
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:35 functional-618200 dockerd[1456]: time="2025-04-08T23:07:35.262471031Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0408 23:10:38.833622   12728 command_runner.go:130] > Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.762713164Z" level=info msg="Loading containers: start."
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.897446846Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.015338367Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.153824862Z" level=info msg="Loading containers: done."
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182692065Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182937366Z" level=info msg="Daemon has completed initialization"
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 systemd[1]: Started Docker Application Container Engine.
	I0408 23:10:38.834237   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.220981402Z" level=info msg="API listen on /var/run/docker.sock"
	I0408 23:10:38.834375   12728 command_runner.go:130] > Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.221045402Z" level=info msg="API listen on [::]:2376"
	I0408 23:10:38.834375   12728 command_runner.go:130] > Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928174323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834375   12728 command_runner.go:130] > Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928255628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834443   12728 command_runner.go:130] > Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928274329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834443   12728 command_runner.go:130] > Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928976471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834533   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011163114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011256119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011273420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011437330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.047888267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048098278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048281989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048657110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089143872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089470391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089714404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.090374541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331240402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331940241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332248459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332901095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587350115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587733437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587951349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.588255466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643351545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643476652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.834557   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643513354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835183   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643620460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835183   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681369670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835183   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681570881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835294   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681658686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835294   12728 command_runner.go:130] > Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.682028307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835294   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094044455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835373   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094486867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835373   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.095561595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835373   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.097530446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835463   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394114311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835463   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394433319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835567   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394665025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835567   12728 command_runner.go:130] > Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.395349443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835567   12728 command_runner.go:130] > Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643182806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835567   12728 command_runner.go:130] > Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643370211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835723   12728 command_runner.go:130] > Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643392711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835723   12728 command_runner.go:130] > Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.645053352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835723   12728 command_runner.go:130] > Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216296816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835827   12728 command_runner.go:130] > Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216387017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835827   12728 command_runner.go:130] > Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216402117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835887   12728 command_runner.go:130] > Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216977424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540620784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540963288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541044989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541180590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.848480641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850292361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850566464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.851150170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.385762643Z" level=info msg="Processing signal 'terminated'"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574335274Z" level=info msg="shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574507675Z" level=warning msg="cleaning up after shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574520575Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.575374478Z" level=info msg="ignoring event" container=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.602965785Z" level=info msg="ignoring event" container=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.603895489Z" level=info msg="shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604175090Z" level=warning msg="cleaning up after shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604242890Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614380530Z" level=info msg="shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614605231Z" level=warning msg="cleaning up after shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614742231Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.620402053Z" level=info msg="ignoring event" container=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.620802455Z" level=info msg="shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621015255Z" level=info msg="ignoring event" container=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621947059Z" level=info msg="ignoring event" container=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.622304660Z" level=info msg="ignoring event" container=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622827062Z" level=warning msg="cleaning up after shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.623203064Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.835964   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622314560Z" level=info msg="shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	I0408 23:10:38.836542   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624293868Z" level=warning msg="cleaning up after shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	I0408 23:10:38.836542   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624306868Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836542   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622381461Z" level=info msg="shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	I0408 23:10:38.836542   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631193795Z" level=warning msg="cleaning up after shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631249695Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.667400535Z" level=info msg="ignoring event" container=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.669623644Z" level=info msg="shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.672188454Z" level=warning msg="cleaning up after shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.672924657Z" level=info msg="ignoring event" container=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.673767960Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681394990Z" level=info msg="ignoring event" container=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681607190Z" level=info msg="ignoring event" container=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.681903492Z" level=info msg="shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685272405Z" level=warning msg="cleaning up after shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685407505Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.671723952Z" level=info msg="shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693693137Z" level=warning msg="cleaning up after shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693789338Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697563052Z" level=info msg="shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697641053Z" level=warning msg="cleaning up after shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697654453Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.725345060Z" level=info msg="ignoring event" container=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.836750   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725697262Z" level=info msg="shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	I0408 23:10:38.837349   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725980963Z" level=warning msg="cleaning up after shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	I0408 23:10:38.837349   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.726206964Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.837349   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.734018694Z" level=info msg="ignoring event" container=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.837349   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.736798905Z" level=info msg="shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	I0408 23:10:38.837581   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737017505Z" level=warning msg="cleaning up after shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	I0408 23:10:38.837581   12728 command_runner.go:130] > Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737255906Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.837581   12728 command_runner.go:130] > Apr 08 23:09:32 functional-618200 dockerd[1456]: time="2025-04-08T23:09:32.552363388Z" level=info msg="ignoring event" container=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.837653   12728 command_runner.go:130] > Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556138103Z" level=info msg="shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	I0408 23:10:38.837653   12728 command_runner.go:130] > Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556756905Z" level=warning msg="cleaning up after shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	I0408 23:10:38.837653   12728 command_runner.go:130] > Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556921006Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.837999   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.565876302Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.643029581Z" level=info msg="ignoring event" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.646699056Z" level=info msg="shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647140153Z" level=warning msg="cleaning up after shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647214253Z" level=info msg="cleaning up dead shim" namespace=moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724363532Z" level=info msg="Daemon shutdown complete"
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724563130Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724658330Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724794029Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Consumed 4.925s CPU time.
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:09:38 functional-618200 dockerd[3978]: time="2025-04-08T23:09:38.782261701Z" level=info msg="Starting up"
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:10:38 functional-618200 dockerd[3978]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0408 23:10:38.838044   12728 command_runner.go:130] > Apr 08 23:10:38 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	I0408 23:10:38.863518   12728 out.go:201] 
	W0408 23:10:38.867350   12728 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 08 23:06:49 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.094333857Z" level=info msg="Starting up"
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.095749501Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.097506580Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.128963677Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152469766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152558876Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152717392Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152739794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152812201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152901110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153079328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153169038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153187739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153197940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153293950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153812303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156561482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156716198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156848512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156952822Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157044531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157169744Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190389421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190521734Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190544737Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190560338Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190576740Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190838067Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191154799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191361820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191472031Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191493633Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191512135Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191527737Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191541238Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191555639Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191571341Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191603144Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191615846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191628447Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191749659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191774162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191800364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191815666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191830867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191844669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191857670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191870171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191882273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191897274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191908775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191920677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191932778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191947379Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191967081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191979383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191992484Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192114796Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192196605Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192262611Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192291214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192304416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192318917Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192331918Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193151202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193285015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193371424Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193820570Z" level=info msg="containerd successfully booted in 0.066941s"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.170474987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.203429127Z" level=info msg="Loading containers: start."
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.350665658Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.583414712Z" level=info msg="Loading containers: done."
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608611503Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608776419Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609056647Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609260067Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.713909013Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.714066029Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:06:50 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.811241096Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813084503Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813257403Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813288003Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813374004Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:07:20 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:07:21 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:07:21 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:07:21 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.861204748Z" level=info msg="Starting up"
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.863521556Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.864856161Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1097
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.891008554Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913514335Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913559535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913591835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913605435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913626835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913637435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913748735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913963436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913985636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913996836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914019636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914159537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.916995847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917210048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917295148Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917328148Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917346448Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917634649Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917741950Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917760750Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917900050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917914850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917957150Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918196151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918327452Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918413452Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918430852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918442352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918453152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918462452Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918473352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918484552Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918499152Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918509952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918520052Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918543853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918558553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918568953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918579553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918589553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918609253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918626253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918638253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918657853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918673253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918682953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918692253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918702953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918715553Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918733953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918744753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918754653Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918959554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919161355Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919325455Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919361655Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919372055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919407356Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919416356Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919735157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919968758Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920117658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920171758Z" level=info msg="containerd successfully booted in 0.029982s"
	Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.908709690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.934950284Z" level=info msg="Loading containers: start."
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.062615440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.175164242Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.282062124Z" level=info msg="Loading containers: done."
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305666909Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305777709Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.341856738Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:07:23 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.343491744Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:07:32 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.905143108Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906371813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906906114Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.907033815Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906918515Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:07:33 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:07:33 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:07:33 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.955484761Z" level=info msg="Starting up"
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.957042767Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.958462672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1462
	Apr 08 23:07:33 functional-618200 dockerd[1462]: time="2025-04-08T23:07:33.983507761Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009132353Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009242353Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009307753Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009324953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009354454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009383954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009545254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009658655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009680555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009691855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009717555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.010024356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012580665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012671765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012945166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013039867Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013070567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013104967Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013460968Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013562869Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013583269Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013598369Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013611569Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013659269Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014010570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014156471Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014247371Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014266571Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014280071Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014397172Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014425272Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014441672Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014458272Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014472772Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014498972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014515572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014537972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014555672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014570972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014585972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014601072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014615672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014629372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014643572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014658573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014679173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014709673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014738473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014783273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014916873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014942274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014955574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014969174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015051774Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015092874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015107074Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015122374Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015133174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015147174Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015158874Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015573476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015638476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015690176Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015715476Z" level=info msg="containerd successfully booted in 0.033079s"
	Apr 08 23:07:35 functional-618200 dockerd[1456]: time="2025-04-08T23:07:35.262471031Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.762713164Z" level=info msg="Loading containers: start."
	Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.897446846Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.015338367Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.153824862Z" level=info msg="Loading containers: done."
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182692065Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182937366Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:07:38 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.220981402Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.221045402Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928174323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928255628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928274329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928976471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011163114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011256119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011273420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011437330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.047888267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048098278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048281989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048657110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089143872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089470391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089714404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.090374541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331240402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331940241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332248459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332901095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587350115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587733437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587951349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.588255466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643351545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643476652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643513354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643620460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681369670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681570881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681658686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.682028307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094044455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094486867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.095561595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.097530446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394114311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394433319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394665025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.395349443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643182806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643370211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643392711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.645053352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216296816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216387017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216402117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216977424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540620784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540963288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541044989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541180590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.848480641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850292361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850566464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.851150170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.385762643Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:09:27 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574335274Z" level=info msg="shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574507675Z" level=warning msg="cleaning up after shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574520575Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.575374478Z" level=info msg="ignoring event" container=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.602965785Z" level=info msg="ignoring event" container=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.603895489Z" level=info msg="shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604175090Z" level=warning msg="cleaning up after shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604242890Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614380530Z" level=info msg="shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614605231Z" level=warning msg="cleaning up after shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614742231Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.620402053Z" level=info msg="ignoring event" container=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.620802455Z" level=info msg="shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621015255Z" level=info msg="ignoring event" container=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621947059Z" level=info msg="ignoring event" container=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.622304660Z" level=info msg="ignoring event" container=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622827062Z" level=warning msg="cleaning up after shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.623203064Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622314560Z" level=info msg="shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624293868Z" level=warning msg="cleaning up after shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624306868Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622381461Z" level=info msg="shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631193795Z" level=warning msg="cleaning up after shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631249695Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.667400535Z" level=info msg="ignoring event" container=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.669623644Z" level=info msg="shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.672188454Z" level=warning msg="cleaning up after shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.672924657Z" level=info msg="ignoring event" container=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.673767960Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681394990Z" level=info msg="ignoring event" container=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681607190Z" level=info msg="ignoring event" container=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.681903492Z" level=info msg="shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685272405Z" level=warning msg="cleaning up after shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685407505Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.671723952Z" level=info msg="shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693693137Z" level=warning msg="cleaning up after shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693789338Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697563052Z" level=info msg="shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697641053Z" level=warning msg="cleaning up after shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697654453Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.725345060Z" level=info msg="ignoring event" container=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725697262Z" level=info msg="shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725980963Z" level=warning msg="cleaning up after shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.726206964Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.734018694Z" level=info msg="ignoring event" container=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.736798905Z" level=info msg="shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737017505Z" level=warning msg="cleaning up after shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737255906Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1456]: time="2025-04-08T23:09:32.552363388Z" level=info msg="ignoring event" container=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556138103Z" level=info msg="shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556756905Z" level=warning msg="cleaning up after shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556921006Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.565876302Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.643029581Z" level=info msg="ignoring event" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.646699056Z" level=info msg="shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647140153Z" level=warning msg="cleaning up after shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647214253Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724363532Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724563130Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724658330Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724794029Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:09:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Consumed 4.925s CPU time.
	Apr 08 23:09:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:09:38 functional-618200 dockerd[3978]: time="2025-04-08T23:09:38.782261701Z" level=info msg="Starting up"
	Apr 08 23:10:38 functional-618200 dockerd[3978]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:10:38 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0408 23:10:38.868272   12728 out.go:270] * 
	W0408 23:10:38.869805   12728 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 23:10:38.876775   12728 out.go:201] 
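	Note: the soft start above fails because dockerd times out dialing /run/containerd/containerd.sock (23:09:38-23:10:38) and systemd marks docker.service failed. A minimal way to re-run the same diagnosis by hand, assuming shell access to the VM via `minikube ssh` (profile name taken from this run; the systemctl/journalctl commands are the ones the failure text itself recommends):

		# Open a shell in the failing VM (profile name from this run).
		minikube ssh -p functional-618200

		# Inside the VM: inspect the unit and its journal, as the error suggests.
		sudo systemctl status docker.service
		sudo journalctl -xeu docker.service --no-pager | tail -n 50

		# dockerd dies dialing this socket, so check whether it exists at all.
		ls -l /run/containerd/containerd.sock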
	
	
	==> Docker <==
	Apr 08 23:32:44 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:32:44Z" level=error msg="error getting RW layer size for container ID 'd1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:32:44 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:32:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee'"
	Apr 08 23:32:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
	Apr 08 23:32:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:32:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:32:44 functional-618200 dockerd[9759]: time="2025-04-08T23:32:44.477366695Z" level=info msg="Starting up"
	Apr 08 23:33:44 functional-618200 dockerd[9759]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:33:44 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:33:44Z" level=error msg="error getting RW layer size for container ID 'b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:33:44 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:33:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc'"
	Apr 08 23:33:44 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:33:44Z" level=error msg="error getting RW layer size for container ID 'd1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:33:44 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:33:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee'"
	Apr 08 23:33:44 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:33:44Z" level=error msg="error getting RW layer size for container ID 'e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:33:44 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:33:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c'"
	Apr 08 23:33:44 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:33:44Z" level=error msg="error getting RW layer size for container ID 'a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:33:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:33:44 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:33:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa'"
	Apr 08 23:33:44 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:33:44Z" level=error msg="error getting RW layer size for container ID 'bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:33:44 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:33:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f'"
	Apr 08 23:33:44 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:33:44Z" level=error msg="error getting RW layer size for container ID 'd4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:33:44 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:33:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61'"
	Apr 08 23:33:44 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:33:44Z" level=error msg="error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Apr 08 23:33:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:33:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:33:44 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:33:44Z" level=error msg="error getting RW layer size for container ID '48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:33:44 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:33:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID '48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245'"
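	Note: "restart counter is at 23" above means systemd has been cycling docker.service since the first failure at 23:09, each attempt ending in the same 60-second containerd-dial timeout. A quick way to read the loop state from inside the VM (a sketch; NRestarts, Result, and ExecMainStatus are standard systemd unit properties):

		# Restart count and last failure result for the unit.
		systemctl show docker.service -p NRestarts,Result,ExecMainStatus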
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-08T23:33:44Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
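	Note: crictl fails here because cri-dockerd is running but its backend (dockerd) is not. The two layers can be separated by probing each socket directly; a sketch assuming the same in-VM shell:

		# Probe the Docker engine socket; a healthy daemon answers "OK".
		curl --silent --unix-socket /var/run/docker.sock http://localhost/_ping; echo

		# Ask cri-dockerd for runtime status over its own socket.
		sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info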
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr 8 23:07] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	[  +0.094636] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.537665] systemd-fstab-generator[1057]: Ignoring "noauto" option for root device
	[  +0.198814] systemd-fstab-generator[1069]: Ignoring "noauto" option for root device
	[  +0.229826] systemd-fstab-generator[1083]: Ignoring "noauto" option for root device
	[  +2.846583] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.173620] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.175758] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +0.246052] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[  +8.663048] systemd-fstab-generator[1449]: Ignoring "noauto" option for root device
	[  +0.103326] kauditd_printk_skb: 206 callbacks suppressed
	[  +5.045655] kauditd_printk_skb: 24 callbacks suppressed
	[  +0.759487] systemd-fstab-generator[1705]: Ignoring "noauto" option for root device
	[  +6.800944] systemd-fstab-generator[1860]: Ignoring "noauto" option for root device
	[  +0.086630] kauditd_printk_skb: 40 callbacks suppressed
	[  +8.016757] systemd-fstab-generator[2285]: Ignoring "noauto" option for root device
	[  +0.140038] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.396453] systemd-fstab-generator[2387]: Ignoring "noauto" option for root device
	[  +0.210902] kauditd_printk_skb: 12 callbacks suppressed
	[Apr 8 23:08] kauditd_printk_skb: 71 callbacks suppressed
	[Apr 8 23:09] systemd-fstab-generator[3506]: Ignoring "noauto" option for root device
	[  +0.614168] systemd-fstab-generator[3549]: Ignoring "noauto" option for root device
	[  +0.260567] systemd-fstab-generator[3561]: Ignoring "noauto" option for root device
	[  +0.277633] systemd-fstab-generator[3575]: Ignoring "noauto" option for root device
	[  +5.335755] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 23:34:44 up 29 min,  0 users,  load average: 0.00, 0.00, 0.00
	Linux functional-618200 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 08 23:34:43 functional-618200 kubelet[2292]: I0408 23:34:43.984332    2292 status_manager.go:890] "Failed to get status for pod" podUID="2d86200df590720b9ed4835cb131ef10" pod="kube-system/kube-scheduler-functional-618200" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-618200\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.702697    2292 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.702883    2292 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.702832    2292 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.704023    2292 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.704120    2292 generic.go:256] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.703006    2292 kubelet.go:3018] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.703971    2292 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.704180    2292 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: I0408 23:34:44.704193    2292 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.702788    2292 log.go:32] "Version from runtime service failed" err="rpc error: code = Unknown desc = failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: I0408 23:34:44.704262    2292 setters.go:602] "Node became not ready" node="functional-618200" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-08T23:34:44Z","lastTransitionTime":"2025-04-08T23:34:44Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 25m17.694284027s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @-\u003e/var/run/docker.sock: read: connection reset by peer]"}
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.702897    2292 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.704584    2292 kuberuntime_container.go:508] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.702918    2292 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.705645    2292 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.706907    2292 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.707593    2292 kuberuntime_container.go:508] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.709854    2292 kubelet.go:1529] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.710676    2292 kubelet_node_status.go:549] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-04-08T23:34:44Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-04-08T23:34:44Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-04-08T23:34:44Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-04-08T23:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-04-08T23:34:44Z\\\",\\\"message\\\":\\\"[container runtime is down, PLEG is not healthy: pleg was last seen active 25m17.694284027s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \\\\\\\"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\\\\\\\": read unix @-\\\\u003e/var/run/docker.sock: read: connection reset by peer]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"containerRuntimeVersion\\\":\\\"docker://Unknown\\\"}}}\" for node \"functional-618200\": Patch \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-618200/status?timeout=10s\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.714175    2292 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"functional-618200\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-618200?timeout=10s\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.717708    2292 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"functional-618200\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-618200?timeout=10s\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.722050    2292 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"functional-618200\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-618200?timeout=10s\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.722793    2292 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"functional-618200\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-618200?timeout=10s\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:34:44 functional-618200 kubelet[2292]: E0408 23:34:44.722917    2292 kubelet_node_status.go:536] "Unable to update node status" err="update node status exceeds retry count"
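	Note: the kubelet entries above show the node marked NotReady (runtime down, PLEG stale for over 25 minutes) while its status patches to the apiserver at 192.168.113.37:8441 are also refused, so the condition never reaches the cluster. Once the runtime and apiserver recover, the same condition can be read back; a sketch using the node name from this run:

		# Ready condition status for the node, once the apiserver answers again.
		kubectl get node functional-618200 \
		  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'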
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 23:32:44.172100    9904 logs.go:279] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:32:44.206385    9904 logs.go:279] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:32:44.245957    9904 logs.go:279] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:32:44.280633    9904 logs.go:279] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:32:44.312028    9904 logs.go:279] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:32:44.342708    9904 logs.go:279] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:32:44.377276    9904 logs.go:279] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:33:44.477273    9904 logs.go:279] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-618200 -n functional-618200
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-618200 -n functional-618200: exit status 2 (11.7004596s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-618200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (180.70s)
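	Note: to iterate on just this subtest outside the full CI matrix, the integration suite can be scoped with -run; a sketch assuming a local minikube checkout with a prebuilt binary under out/ (the path and timeout are illustrative, not taken from this report):

		# From the minikube repo root, run only the failing subtest.
		go test ./test/integration -run 'TestFunctional/serial/MinikubeKubectlCmdDirectly' -timeout 30m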

                                                
                                    
TestFunctional/serial/ExtraConfig (361.52s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-618200 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0408 23:36:13.546060    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:774: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-618200 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 90 (2m48.4022342s)

                                                
                                                
-- stdout --
	* [functional-618200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "functional-618200" primary control-plane node in "functional-618200" cluster
	* Updating the running hyperv "functional-618200" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 08 23:06:49 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.094333857Z" level=info msg="Starting up"
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.095749501Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.097506580Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.128963677Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152469766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152558876Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152717392Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152739794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152812201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152901110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153079328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153169038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153187739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153197940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153293950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153812303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156561482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156716198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156848512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156952822Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157044531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157169744Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190389421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190521734Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190544737Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190560338Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190576740Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190838067Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191154799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191361820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191472031Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191493633Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191512135Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191527737Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191541238Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191555639Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191571341Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191603144Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191615846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191628447Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191749659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191774162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191800364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191815666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191830867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191844669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191857670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191870171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191882273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191897274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191908775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191920677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191932778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191947379Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191967081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191979383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191992484Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192114796Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192196605Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192262611Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192291214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192304416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192318917Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192331918Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193151202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193285015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193371424Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193820570Z" level=info msg="containerd successfully booted in 0.066941s"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.170474987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.203429127Z" level=info msg="Loading containers: start."
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.350665658Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.583414712Z" level=info msg="Loading containers: done."
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608611503Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608776419Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609056647Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609260067Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.713909013Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.714066029Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:06:50 functional-618200 systemd[1]: Started Docker Application Container Engine.
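	[note] The ip6tables warning in this first boot is benign for an IPv4-only cluster: the guest kernel (5.10.207) has no ip6tables nat table loaded, so dockerd simply skips creating the IPv6 DOCKER chain. The bridge-nf-call warnings are more relevant to Kubernetes, since kube-proxy expects bridged traffic to traverse iptables. Assuming shell access to the VM (e.g. via minikube ssh), one way to check both would be:
	    $ lsmod | grep -E 'ip6table_nat|br_netfilter'    # are the relevant modules loaded?
	    $ sudo sysctl net.bridge.bridge-nf-call-iptables # kube-proxy expects this to be 1
	    $ sudo modprobe ip6table_nat                     # hypothetical: only if IPv6 NAT is actually wanted
	(a diagnostic sketch, not commands executed by the test)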
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.811241096Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813084503Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813257403Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813288003Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813374004Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:07:20 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:07:21 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:07:21 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
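	[note] This stop/start cycle roughly 30 seconds after the first boot is consistent with minikube's provisioner restarting Docker after writing its daemon configuration, rather than a crash: the shutdown above is clean ("Daemon shutdown complete", "Deactivated successfully"). Assuming VM access, the restart history could be traced with:
	    $ minikube -p functional-618200 ssh -- 'sudo journalctl -u docker --no-pager | grep -E "Started|Stopped"'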
	Apr 08 23:07:21 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.861204748Z" level=info msg="Starting up"
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.863521556Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.864856161Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1097
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.891008554Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913514335Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913559535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913591835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913605435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913626835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913637435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913748735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913963436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913985636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913996836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914019636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914159537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.916995847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917210048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917295148Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917328148Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917346448Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917634649Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917741950Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917760750Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917900050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917914850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917957150Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918196151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918327452Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918413452Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918430852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918442352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918453152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918462452Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918473352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918484552Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918499152Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918509952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918520052Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918543853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918558553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918568953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918579553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918589553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918609253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918626253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918638253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918657853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918673253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918682953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918692253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918702953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918715553Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918733953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918744753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918754653Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918959554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919161355Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919325455Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919361655Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919372055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919407356Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919416356Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919735157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919968758Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920117658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920171758Z" level=info msg="containerd successfully booted in 0.029982s"
	Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.908709690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.934950284Z" level=info msg="Loading containers: start."
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.062615440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.175164242Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.282062124Z" level=info msg="Loading containers: done."
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305666909Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305777709Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.341856738Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:07:23 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.343491744Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:07:32 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.905143108Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906371813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906906114Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.907033815Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906918515Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:07:33 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:07:33 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:07:33 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.955484761Z" level=info msg="Starting up"
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.957042767Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.958462672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1462
	Apr 08 23:07:33 functional-618200 dockerd[1462]: time="2025-04-08T23:07:33.983507761Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009132353Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009242353Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009307753Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009324953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009354454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009383954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009545254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009658655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009680555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009691855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009717555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.010024356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012580665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012671765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012945166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013039867Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013070567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013104967Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013460968Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013562869Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013583269Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013598369Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013611569Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013659269Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014010570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014156471Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014247371Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014266571Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014280071Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014397172Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014425272Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014441672Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014458272Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014472772Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014498972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014515572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014537972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014555672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014570972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014585972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014601072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014615672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014629372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014643572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014658573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014679173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014709673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014738473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014783273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014916873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014942274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014955574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014969174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015051774Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015092874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015107074Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015122374Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015133174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015147174Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015158874Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015573476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015638476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015690176Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015715476Z" level=info msg="containerd successfully booted in 0.033079s"
	Apr 08 23:07:35 functional-618200 dockerd[1456]: time="2025-04-08T23:07:35.262471031Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.762713164Z" level=info msg="Loading containers: start."
	Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.897446846Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.015338367Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.153824862Z" level=info msg="Loading containers: done."
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182692065Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182937366Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:07:38 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.220981402Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.221045402Z" level=info msg="API listen on [::]:2376"
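	[note] After this third boot the daemon (pid 1456) is serving on both the local socket and TCP 2376, the TLS endpoint the hyperv driver uses from the Windows host. Had the cluster stayed healthy, the host-side connection settings could be inspected with the report's own binary, e.g.:
	    $ out/minikube-windows-amd64.exe -p functional-618200 docker-env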
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928174323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928255628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928274329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928976471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011163114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011256119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011273420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011437330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.047888267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048098278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048281989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048657110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089143872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089470391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089714404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.090374541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331240402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331940241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332248459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332901095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587350115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587733437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587951349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.588255466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643351545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643476652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643513354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643620460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681369670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681570881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681658686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.682028307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094044455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094486867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.095561595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.097530446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394114311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394433319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394665025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.395349443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643182806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643370211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643392711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.645053352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216296816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216387017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216402117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216977424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540620784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540963288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541044989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541180590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.848480641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850292361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850566464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.851150170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
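	[note] Each four-line burst of runc.v2 plugin loads above corresponds to a new containerd shim, i.e. one container task starting; the bursts between 23:07:46 and 23:08:07 line up with the kube-system control-plane pods coming up. Assuming ctr is present in the VM, the tasks could be listed directly against the containerd socket shown earlier in the log:
	    $ minikube -p functional-618200 ssh -- 'sudo ctr -a /var/run/docker/containerd/containerd.sock -n moby tasks ls'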
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.385762643Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:09:27 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574335274Z" level=info msg="shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574507675Z" level=warning msg="cleaning up after shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574520575Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.575374478Z" level=info msg="ignoring event" container=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.602965785Z" level=info msg="ignoring event" container=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.603895489Z" level=info msg="shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604175090Z" level=warning msg="cleaning up after shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604242890Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614380530Z" level=info msg="shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614605231Z" level=warning msg="cleaning up after shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614742231Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.620402053Z" level=info msg="ignoring event" container=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.620802455Z" level=info msg="shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621015255Z" level=info msg="ignoring event" container=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621947059Z" level=info msg="ignoring event" container=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.622304660Z" level=info msg="ignoring event" container=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622827062Z" level=warning msg="cleaning up after shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.623203064Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622314560Z" level=info msg="shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624293868Z" level=warning msg="cleaning up after shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624306868Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622381461Z" level=info msg="shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631193795Z" level=warning msg="cleaning up after shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631249695Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.667400535Z" level=info msg="ignoring event" container=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.669623644Z" level=info msg="shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.672188454Z" level=warning msg="cleaning up after shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.672924657Z" level=info msg="ignoring event" container=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.673767960Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681394990Z" level=info msg="ignoring event" container=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681607190Z" level=info msg="ignoring event" container=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.681903492Z" level=info msg="shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685272405Z" level=warning msg="cleaning up after shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685407505Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.671723952Z" level=info msg="shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693693137Z" level=warning msg="cleaning up after shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693789338Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697563052Z" level=info msg="shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697641053Z" level=warning msg="cleaning up after shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697654453Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.725345060Z" level=info msg="ignoring event" container=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725697262Z" level=info msg="shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725980963Z" level=warning msg="cleaning up after shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.726206964Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.734018694Z" level=info msg="ignoring event" container=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.736798905Z" level=info msg="shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737017505Z" level=warning msg="cleaning up after shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737255906Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1456]: time="2025-04-08T23:09:32.552363388Z" level=info msg="ignoring event" container=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556138103Z" level=info msg="shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556756905Z" level=warning msg="cleaning up after shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556921006Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.565876302Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.643029581Z" level=info msg="ignoring event" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.646699056Z" level=info msg="shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647140153Z" level=warning msg="cleaning up after shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647214253Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724363532Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724563130Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724658330Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724794029Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:09:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Consumed 4.925s CPU time.
	Apr 08 23:09:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:09:38 functional-618200 dockerd[3978]: time="2025-04-08T23:09:38.782261701Z" level=info msg="Starting up"
	Apr 08 23:10:38 functional-618200 dockerd[3978]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:10:38 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Apr 08 23:10:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:10:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:10:38 functional-618200 dockerd[4187]: time="2025-04-08T23:10:38.990065142Z" level=info msg="Starting up"
	Apr 08 23:11:39 functional-618200 dockerd[4187]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:11:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:11:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:11:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:11:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Apr 08 23:11:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:11:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:11:39 functional-618200 dockerd[4495]: time="2025-04-08T23:11:39.240374985Z" level=info msg="Starting up"
	Apr 08 23:12:39 functional-618200 dockerd[4495]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:12:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:12:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:12:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:12:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Apr 08 23:12:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:12:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:12:39 functional-618200 dockerd[4717]: time="2025-04-08T23:12:39.435825366Z" level=info msg="Starting up"
	Apr 08 23:13:39 functional-618200 dockerd[4717]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:13:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:13:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:13:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:13:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Apr 08 23:13:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:13:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:13:39 functional-618200 dockerd[4937]: time="2025-04-08T23:13:39.647599381Z" level=info msg="Starting up"
	Apr 08 23:14:39 functional-618200 dockerd[4937]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:14:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:14:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:14:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:14:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Apr 08 23:14:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:14:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:14:39 functional-618200 dockerd[5287]: time="2025-04-08T23:14:39.994059486Z" level=info msg="Starting up"
	Apr 08 23:15:40 functional-618200 dockerd[5287]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:15:40 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:15:40 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:15:40 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:15:40 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
	Apr 08 23:15:40 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:15:40 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:15:40 functional-618200 dockerd[5511]: time="2025-04-08T23:15:40.241827213Z" level=info msg="Starting up"
	Apr 08 23:16:40 functional-618200 dockerd[5511]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:16:40 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:16:40 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:16:40 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:16:40 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
	Apr 08 23:16:40 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:16:40 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:16:40 functional-618200 dockerd[5774]: time="2025-04-08T23:16:40.479744325Z" level=info msg="Starting up"
	Apr 08 23:17:40 functional-618200 dockerd[5774]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:17:40 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:17:40 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:17:40 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:17:40 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 8.
	Apr 08 23:17:40 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:17:40 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:17:40 functional-618200 dockerd[6010]: time="2025-04-08T23:17:40.734060234Z" level=info msg="Starting up"
	Apr 08 23:18:40 functional-618200 dockerd[6010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:18:40 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:18:40 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:18:40 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:18:40 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 9.
	Apr 08 23:18:40 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:18:40 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:18:40 functional-618200 dockerd[6233]: time="2025-04-08T23:18:40.980938832Z" level=info msg="Starting up"
	Apr 08 23:19:41 functional-618200 dockerd[6233]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:19:41 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:19:41 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:19:41 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:19:41 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 10.
	Apr 08 23:19:41 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:19:41 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:19:41 functional-618200 dockerd[6451]: time="2025-04-08T23:19:41.243144928Z" level=info msg="Starting up"
	Apr 08 23:20:41 functional-618200 dockerd[6451]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:20:41 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:20:41 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:20:41 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:20:41 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
	Apr 08 23:20:41 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:20:41 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:20:41 functional-618200 dockerd[6677]: time="2025-04-08T23:20:41.482548376Z" level=info msg="Starting up"
	Apr 08 23:21:41 functional-618200 dockerd[6677]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:21:41 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:21:41 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:21:41 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:21:41 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
	Apr 08 23:21:41 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:21:41 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:21:41 functional-618200 dockerd[6897]: time="2025-04-08T23:21:41.739358273Z" level=info msg="Starting up"
	Apr 08 23:22:41 functional-618200 dockerd[6897]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:22:41 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:22:41 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:22:41 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:22:41 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 13.
	Apr 08 23:22:41 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:22:41 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:22:41 functional-618200 dockerd[7137]: time="2025-04-08T23:22:41.989317104Z" level=info msg="Starting up"
	Apr 08 23:23:42 functional-618200 dockerd[7137]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:23:42 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:23:42 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:23:42 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:23:42 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
	Apr 08 23:23:42 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:23:42 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:23:42 functional-618200 dockerd[7388]: time="2025-04-08T23:23:42.246986404Z" level=info msg="Starting up"
	Apr 08 23:24:42 functional-618200 dockerd[7388]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:24:42 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:24:42 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:24:42 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:24:42 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 15.
	Apr 08 23:24:42 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:24:42 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:24:42 functional-618200 dockerd[7634]: time="2025-04-08T23:24:42.498712284Z" level=info msg="Starting up"
	Apr 08 23:25:42 functional-618200 dockerd[7634]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:25:42 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:25:42 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:25:42 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:25:42 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 16.
	Apr 08 23:25:42 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:25:42 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:25:42 functional-618200 dockerd[7865]: time="2025-04-08T23:25:42.733372335Z" level=info msg="Starting up"
	Apr 08 23:26:42 functional-618200 dockerd[7865]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:26:42 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:26:42 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:26:42 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:26:42 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
	Apr 08 23:26:42 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:26:42 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:26:42 functional-618200 dockerd[8184]: time="2025-04-08T23:26:42.990759238Z" level=info msg="Starting up"
	Apr 08 23:27:43 functional-618200 dockerd[8184]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:27:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:27:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:27:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:27:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
	Apr 08 23:27:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:27:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:27:43 functional-618200 dockerd[8413]: time="2025-04-08T23:27:43.200403383Z" level=info msg="Starting up"
	Apr 08 23:28:43 functional-618200 dockerd[8413]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:28:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:28:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:28:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:28:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 19.
	Apr 08 23:28:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:28:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:28:43 functional-618200 dockerd[8626]: time="2025-04-08T23:28:43.448813456Z" level=info msg="Starting up"
	Apr 08 23:29:43 functional-618200 dockerd[8626]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:29:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:29:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:29:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:29:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Apr 08 23:29:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:29:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:29:43 functional-618200 dockerd[8971]: time="2025-04-08T23:29:43.729262267Z" level=info msg="Starting up"
	Apr 08 23:30:43 functional-618200 dockerd[8971]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:30:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:30:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:30:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:30:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Apr 08 23:30:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:30:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:30:43 functional-618200 dockerd[9191]: time="2025-04-08T23:30:43.933489137Z" level=info msg="Starting up"
	Apr 08 23:31:43 functional-618200 dockerd[9191]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:31:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:31:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:31:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:31:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
	Apr 08 23:31:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:31:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:31:44 functional-618200 dockerd[9408]: time="2025-04-08T23:31:44.168816618Z" level=info msg="Starting up"
	Apr 08 23:32:44 functional-618200 dockerd[9408]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:32:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:32:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:32:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:32:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
	Apr 08 23:32:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:32:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:32:44 functional-618200 dockerd[9759]: time="2025-04-08T23:32:44.477366695Z" level=info msg="Starting up"
	Apr 08 23:33:44 functional-618200 dockerd[9759]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:33:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:33:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:33:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:33:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
	Apr 08 23:33:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:33:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:33:44 functional-618200 dockerd[9976]: time="2025-04-08T23:33:44.668897222Z" level=info msg="Starting up"
	Apr 08 23:34:44 functional-618200 dockerd[9976]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:34:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:34:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:34:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:34:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 25.
	Apr 08 23:34:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:34:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:34:44 functional-618200 dockerd[10189]: time="2025-04-08T23:34:44.897317954Z" level=info msg="Starting up"
	Apr 08 23:35:44 functional-618200 dockerd[10189]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:35:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:35:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:35:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:35:45 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 26.
	Apr 08 23:35:45 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:35:45 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:35:45 functional-618200 dockerd[10580]: time="2025-04-08T23:35:45.235219924Z" level=info msg="Starting up"
	Apr 08 23:36:13 functional-618200 dockerd[10580]: time="2025-04-08T23:36:13.466116044Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:36:45 functional-618200 dockerd[10580]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:36:45 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:36:45 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:36:45 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:36:45 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:36:45 functional-618200 dockerd[11011]: time="2025-04-08T23:36:45.327202140Z" level=info msg="Starting up"
	Apr 08 23:37:45 functional-618200 dockerd[11011]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:37:45 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:37:45 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:37:45 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
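The stderr block above carries the whole failure signature: after 23:09:38 every fresh dockerd process logs "Starting up", spends exactly one minute failing to dial /run/containerd/containerd.sock, and exits, so systemd's restart counter climbs from 1 to 26 without progress. The sketch below is illustrative Go, not dockerd or minikube source (the dialWithDeadline helper is ours); it shows how a blocking client dial retried under a one-minute context yields exactly the `failed to dial "/run/containerd/containerd.sock": ... context deadline exceeded` line repeated in the journal.

// Illustrative sketch only: a blocking connect to containerd's unix
// socket with a startup deadline, mirroring the pattern that fails
// 26 times in the journal above.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// dialWithDeadline retries the dial until it succeeds or ctx expires
// (a hypothetical helper, named here for the sketch).
func dialWithDeadline(ctx context.Context, socket string) (net.Conn, error) {
	var d net.Dialer
	for {
		conn, err := d.DialContext(ctx, "unix", socket)
		if err == nil {
			return conn, nil
		}
		select {
		case <-ctx.Done():
			// e.g. failed to dial "/run/containerd/containerd.sock": context deadline exceeded
			return nil, fmt.Errorf("failed to dial %q: %w", socket, ctx.Err())
		case <-time.After(time.Second):
			// containerd not up yet; try again until the deadline
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	conn, err := dialWithDeadline(ctx, "/run/containerd/containerd.sock")
	if err != nil {
		fmt.Println(err)
		return
	}
	conn.Close()
	fmt.Println("containerd socket is accepting connections")
}

On a healthy node the dial returns almost immediately; a minute-long hang on every attempt points at containerd itself never coming up inside the VM, which is the natural next thing to check in its journal.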
functional_test.go:776: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-618200 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 90
functional_test.go:778: restart took 2m48.5196328s for "functional-618200" cluster.
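Every "(dbg) Run:" entry in this report is the Go test harness shelling out to the built minikube binary and recording its exit status (status 90 for the start above). A minimal sketch of that pattern, assuming a hypothetical runCmd helper rather than the framework's real one:

// Sketch of the "(dbg) Run:" pattern: invoke the binary, capture
// combined output, and surface the process exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runCmd is a hypothetical helper, not the actual test-framework function.
func runCmd(bin string, args ...string) (string, int, error) {
	out, err := exec.Command(bin, args...).CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit from the binary itself, e.g. the 90 above.
		return string(out), exitErr.ExitCode(), nil
	}
	return string(out), 0, err
}

func main() {
	out, code, err := runCmd("out/minikube-windows-amd64.exe",
		"start", "-p", "functional-618200",
		"--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision",
		"--wait=all")
	if err != nil {
		fmt.Println("could not run binary:", err)
		return
	}
	fmt.Printf("exit status %d\n%s", code, out)
}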
I0408 23:37:45.580421    9864 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-618200 -n functional-618200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-618200 -n functional-618200: exit status 2 (11.9829399s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
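The --format={{.Host}} argument above is a Go text/template rendered against minikube's status structure, which is why stdout carries the single word "Running" even though the command exits 2. A minimal illustration (the Status struct and its field values here are assumptions for the sketch, not minikube's actual type):

// Sketch: render one field of a status struct with the same template
// text passed via --format above.
package main

import (
	"os"
	"text/template"
)

// Status stands in for the structure minikube formats; field names
// are illustrative assumptions.
type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
	os.Stdout.WriteString("\n") // the template itself emits just "Running"
}

A host that reports Running while the start command exits 90 is consistent with the journal above: the Hyper-V VM is up, but Docker inside it never started.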
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-618200 logs -n 25
E0408 23:38:10.464332    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-618200 logs -n 25: (2m48.6386396s)
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| unpause | nospam-268300 --log_dir                                                  | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-268300 --log_dir                                                  | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-268300 --log_dir                                                  | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                                  | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                                  | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                                  | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| delete  | -p nospam-268300                                                         | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	| start   | -p functional-618200                                                     | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:08 UTC |
	|         | --memory=4000                                                            |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |         |                     |                     |
	| start   | -p functional-618200                                                     | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:08 UTC |                     |
	|         | --alsologtostderr -v=8                                                   |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache add                                              | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:15 UTC | 08 Apr 25 23:17 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache add                                              | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:17 UTC | 08 Apr 25 23:19 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache add                                              | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:19 UTC | 08 Apr 25 23:21 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache add                                              | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:21 UTC | 08 Apr 25 23:22 UTC |
	|         | minikube-local-cache-test:functional-618200                              |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache delete                                           | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC | 08 Apr 25 23:22 UTC |
	|         | minikube-local-cache-test:functional-618200                              |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC | 08 Apr 25 23:22 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | list                                                                     | minikube          | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC | 08 Apr 25 23:22 UTC |
	| ssh     | functional-618200 ssh sudo                                               | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC |                     |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-618200                                                        | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC |                     |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-618200 ssh                                                    | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:23 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache reload                                           | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:23 UTC | 08 Apr 25 23:25 UTC |
	| ssh     | functional-618200 ssh                                                    | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:25 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:25 UTC | 08 Apr 25 23:25 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:25 UTC | 08 Apr 25 23:25 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-618200 kubectl --                                             | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:28 UTC |                     |
	|         | --context functional-618200                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-618200                                                     | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:34 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 23:34:57
	Running on machine: minikube6
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 23:34:57.160655    4680 out.go:345] Setting OutFile to fd 1364 ...
	I0408 23:34:57.227306    4680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:34:57.227306    4680 out.go:358] Setting ErrFile to fd 1372...
	I0408 23:34:57.227306    4680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:34:57.246367    4680 out.go:352] Setting JSON to false
	I0408 23:34:57.249336    4680 start.go:129] hostinfo: {"hostname":"minikube6","uptime":12294,"bootTime":1744143002,"procs":175,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0408 23:34:57.250337    4680 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 23:34:57.254337    4680 out.go:177] * [functional-618200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0408 23:34:57.259337    4680 notify.go:220] Checking for updates...
	I0408 23:34:57.259337    4680 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0408 23:34:57.262420    4680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 23:34:57.265869    4680 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0408 23:34:57.268764    4680 out.go:177]   - MINIKUBE_LOCATION=20501
	I0408 23:34:57.271844    4680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 23:34:57.274915    4680 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:34:57.275745    4680 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 23:35:02.492013    4680 out.go:177] * Using the hyperv driver based on existing profile
	I0408 23:35:02.497227    4680 start.go:297] selected driver: hyperv
	I0408 23:35:02.497227    4680 start.go:901] validating driver "hyperv" against &{Name:functional-618200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-618200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.37 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:35:02.497227    4680 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 23:35:02.546322    4680 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 23:35:02.546322    4680 cni.go:84] Creating CNI manager for ""
	I0408 23:35:02.547269    4680 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 23:35:02.547269    4680 start.go:340] cluster config:
	{Name:functional-618200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-618200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.37 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
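
Note: the two dumps above are the full ClusterConfig as loaded and as amended for this start (the second adds the apiserver enable-admission-plugins=NamespaceAutoProvision extra option), and profile.go:143 then persists it to the profile's config.json. A minimal sketch of that save step, assuming a hypothetical trimmed-down struct with a few field names copied from the dump (the real type lives in minikube's config package):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // ClusterConfig is a hypothetical subset of the fields shown in the dump
    // above, for illustration only.
    type ClusterConfig struct {
        Name              string
        Driver            string
        Memory            int
        CPUs              int
        KubernetesVersion string
    }

    func main() {
        cfg := ClusterConfig{
            Name: "functional-618200", Driver: "hyperv",
            Memory: 4000, CPUs: 2, KubernetesVersion: "v1.32.2",
        }
        out, _ := json.MarshalIndent(cfg, "", "  ") // roughly what config.json holds
        fmt.Println(string(out))
    }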
	I0408 23:35:02.547269    4680 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 23:35:02.554319    4680 out.go:177] * Starting "functional-618200" primary control-plane node in "functional-618200" cluster
	I0408 23:35:02.556281    4680 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0408 23:35:02.557271    4680 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0408 23:35:02.557271    4680 cache.go:56] Caching tarball of preloaded images
	I0408 23:35:02.557271    4680 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0408 23:35:02.557271    4680 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0408 23:35:02.557271    4680 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-618200\config.json ...
	I0408 23:35:02.560231    4680 start.go:360] acquireMachinesLock for functional-618200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 23:35:02.560231    4680 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-618200"
	I0408 23:35:02.560231    4680 start.go:96] Skipping create...Using existing machine configuration
	I0408 23:35:02.560231    4680 fix.go:54] fixHost starting: 
	I0408 23:35:02.560231    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:05.222913    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:05.222913    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:05.222999    4680 fix.go:112] recreateIfNeeded on functional-618200: state=Running err=<nil>
	W0408 23:35:05.222999    4680 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 23:35:05.225946    4680 out.go:177] * Updating the running hyperv "functional-618200" VM ...
	I0408 23:35:05.230009    4680 machine.go:93] provisionDockerMachine start ...
	I0408 23:35:05.230204    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:07.291911    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:07.292084    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:07.292225    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:09.764896    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:09.764896    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:09.772026    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:09.772916    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:09.772916    4680 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 23:35:09.909726    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-618200
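
Note: the &{{{<nil> ...} 192.168.113.37 22 <nil> <nil>} value above is libmachine's "native" Go SSH client, and each provisioning step is one short command run through it. A sketch of the equivalent call with golang.org/x/crypto/ssh (assumed illustration; key auth via the profile's id_rsa and timeouts are elided):

    package main

    import (
        "fmt"
        "log"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH dials the VM and runs one command, as the hostname check does.
    func runOverSSH(addr string, cfg *ssh.ClientConfig, cmd string) (string, error) {
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        // Auth is omitted for brevity; the real client loads the id_rsa key
        // shown later in this log.
        cfg := &ssh.ClientConfig{User: "docker", HostKeyCallback: ssh.InsecureIgnoreHostKey()}
        out, err := runOverSSH("192.168.113.37:22", cfg, "hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(out) // prints "functional-618200" in the run above
    }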
	
	I0408 23:35:09.909912    4680 buildroot.go:166] provisioning hostname "functional-618200"
	I0408 23:35:09.909912    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:11.997581    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:11.997581    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:11.998187    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:14.437911    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:14.437911    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:14.443507    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:14.444263    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:14.444331    4680 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-618200 && echo "functional-618200" | sudo tee /etc/hostname
	I0408 23:35:14.603359    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-618200
	
	I0408 23:35:14.603469    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:16.670523    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:16.671534    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:16.671557    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:19.147238    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:19.147238    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:19.153778    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:19.154064    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:19.154064    4680 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-618200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-618200/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-618200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 23:35:19.293655    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
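
Note: the here-script above is idempotent: it leaves /etc/hosts alone when the hostname is already present, rewrites an existing 127.0.1.1 entry if there is one, and only appends as a last resort. A hypothetical helper (not minikube's code) that renders the same snippet for a given hostname:

    package main

    import "fmt"

    // hostsPatch renders the idempotent /etc/hosts edit for a given hostname.
    func hostsPatch(name string) string {
        return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
    }

    func main() { fmt.Println(hostsPatch("functional-618200")) }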
	I0408 23:35:19.293818    4680 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0408 23:35:19.293818    4680 buildroot.go:174] setting up certificates
	I0408 23:35:19.293918    4680 provision.go:84] configureAuth start
	I0408 23:35:19.293918    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:21.418011    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:21.418011    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:21.418011    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:23.915067    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:23.915750    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:23.915843    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:26.054110    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:26.054110    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:26.054245    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:28.570897    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:28.570897    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:28.570979    4680 provision.go:143] copyHostCerts
	I0408 23:35:28.571441    4680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0408 23:35:28.571441    4680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0408 23:35:28.572091    4680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0408 23:35:28.573882    4680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0408 23:35:28.573882    4680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0408 23:35:28.574303    4680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0408 23:35:28.575503    4680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0408 23:35:28.575503    4680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0408 23:35:28.575803    4680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0408 23:35:28.576584    4680 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-618200 san=[127.0.0.1 192.168.113.37 functional-618200 localhost minikube]
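
Note: configureAuth re-issues the Docker server certificate against the cached CA with the SAN list shown (loopback, the VM's current IP, and the machine names), so the daemon's TLS identity tracks the VM's DHCP-assigned address. A compact sketch of that issuance with crypto/x509, assuming RSA keys (the real logic lives in libmachine's cert helpers):

    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server cert with the SAN set seen in the log.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-618200"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"functional-618200", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.113.37")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }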
	I0408 23:35:28.959411    4680 provision.go:177] copyRemoteCerts
	I0408 23:35:28.968415    4680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 23:35:28.968415    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:31.020048    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:31.020048    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:31.020798    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:33.462537    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:33.462537    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:33.462537    4680 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:35:33.576960    4680 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6084838s)
	I0408 23:35:33.577533    4680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0408 23:35:33.623672    4680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 23:35:33.670466    4680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 23:35:33.717400    4680 provision.go:87] duration metric: took 14.4232931s to configureAuth
	I0408 23:35:33.717400    4680 buildroot.go:189] setting minikube options for container-runtime
	I0408 23:35:33.717979    4680 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:35:33.718051    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:35.820801    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:35.821878    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:35.822118    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:38.293353    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:38.293353    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:38.299330    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:38.300018    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:38.300018    4680 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0408 23:35:38.425797    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0408 23:35:38.425797    4680 buildroot.go:70] root file system type: tmpfs
	I0408 23:35:38.426995    4680 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0408 23:35:38.427061    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:40.452569    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:40.452569    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:40.452796    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:42.927371    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:42.927371    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:42.934515    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:42.935261    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:42.935261    4680 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0408 23:35:43.086612    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0408 23:35:43.086740    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:45.178050    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:45.178179    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:45.178179    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:47.646488    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:47.647562    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:47.653138    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:47.653919    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:47.653919    4680 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0408 23:35:47.796320    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 23:35:47.796320    4680 machine.go:96] duration metric: took 42.5657539s to provisionDockerMachine
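
Note: the template's own comments describe it as a drop-in, where the empty ExecStart= is required to clear the inherited command (systemd rejects a second ExecStart= outside Type=oneshot), though here it is written over /lib/systemd/system/docker.service itself and only swapped in when diff finds a change. The payload is rendered before being piped through sudo tee; a sketch of that rendering step, assuming a text/template helper and a trimmed unit body:

    package main

    import (
        "os"
        "text/template"
    )

    // unitTmpl is a shortened stand-in for the docker.service payload above.
    const unitTmpl = `[Unit]
    Description=Docker Application Container Engine
    After=network.target minikube-automount.service docker.socket

    [Service]
    Type=notify
    Restart=on-failure
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 --label provider={{.Provider}}
    `

    func main() {
        t := template.Must(template.New("docker.service").Parse(unitTmpl))
        _ = t.Execute(os.Stdout, struct{ Provider string }{Provider: "hyperv"})
    }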
	I0408 23:35:47.796320    4680 start.go:293] postStartSetup for "functional-618200" (driver="hyperv")
	I0408 23:35:47.796508    4680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 23:35:47.808373    4680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 23:35:47.808373    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:49.907410    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:49.907410    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:49.907410    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:52.435264    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:52.435264    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:52.436078    4680 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:35:52.536680    4680 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7282442s)
	I0408 23:35:52.550709    4680 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 23:35:52.557305    4680 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 23:35:52.557354    4680 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0408 23:35:52.558201    4680 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0408 23:35:52.560040    4680 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
	I0408 23:35:52.561052    4680 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts -> hosts in /etc/test/nested/copy/9864
	I0408 23:35:52.572449    4680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9864
	I0408 23:35:52.591479    4680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
	I0408 23:35:52.632158    4680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts --> /etc/test/nested/copy/9864/hosts (40 bytes)
	I0408 23:35:52.674167    4680 start.go:296] duration metric: took 4.8777819s for postStartSetup
	I0408 23:35:52.674305    4680 fix.go:56] duration metric: took 50.113417s for fixHost
	I0408 23:35:52.674384    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:54.767684    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:54.767684    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:54.767684    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:57.261834    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:57.261834    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:57.271187    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:57.271187    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:57.271187    4680 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 23:35:57.398373    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744155357.426064067
	
	I0408 23:35:57.398373    4680 fix.go:216] guest clock: 1744155357.426064067
	I0408 23:35:57.398373    4680 fix.go:229] Guest: 2025-04-08 23:35:57.426064067 +0000 UTC Remote: 2025-04-08 23:35:52.6743059 +0000 UTC m=+55.594526801 (delta=4.751758167s)
	I0408 23:35:57.398607    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:59.476535    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:59.476535    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:59.477439    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:36:01.946547    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:36:01.946755    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:01.952277    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:36:01.952431    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:36:01.952431    4680 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744155357
	I0408 23:36:02.109581    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr  8 23:35:57 UTC 2025
	
	I0408 23:36:02.109581    4680 fix.go:236] clock set: Tue Apr  8 23:35:57 UTC 2025
	 (err=<nil>)
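
Note: fixHost measured the guest clock about 4.75s ahead of the host and reset it with sudo date -s @<unix-seconds>. The decision reduces to a skew check like this sketch (the 1s threshold is an assumption for illustration):

    package main

    import (
        "fmt"
        "time"
    )

    // clockFixCmd returns the date(1) command seen in the log when host and
    // guest clocks disagree by more than the (assumed) threshold.
    func clockFixCmd(host, guest time.Time) (string, bool) {
        if d := host.Sub(guest); d > time.Second || d < -time.Second {
            return fmt.Sprintf("sudo date -s @%d", host.Unix()), true
        }
        return "", false
    }

    func main() {
        guest := time.Now().Add(4751758167 * time.Nanosecond) // the delta from the log
        fmt.Println(clockFixCmd(time.Now(), guest))
    }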
	I0408 23:36:02.109581    4680 start.go:83] releasing machines lock for "functional-618200", held for 59.5485681s
	I0408 23:36:02.110548    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:36:04.180009    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:36:04.180193    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:04.180261    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:36:06.679668    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:36:06.679668    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:06.684777    4680 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0408 23:36:06.684909    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:36:06.693996    4680 ssh_runner.go:195] Run: cat /version.json
	I0408 23:36:06.693996    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:36:08.902982    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:36:08.903217    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:08.903217    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:36:08.911965    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:36:08.911965    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:08.911965    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:36:11.559377    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:36:11.559377    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:11.560763    4680 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:36:11.579839    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:36:11.579839    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:11.580662    4680 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:36:11.653575    4680 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9686214s)
	W0408 23:36:11.653575    4680 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
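
Note: this exit status 127 is the root of the registry warning that follows: the probe ran the Windows binary name curl.exe inside the Linux guest's bash, where no such command exists, so reachability of registry.k8s.io was never actually tested. A hypothetical guard (not minikube's code) that picks the binary name by where the probe executes:

    package main

    import (
        "fmt"
        "runtime"
    )

    // curlBinary picks the curl name for where the probe actually runs.
    // Inside the Linux VM it must be plain "curl" regardless of the host OS.
    func curlBinary(insideVM bool) string {
        if insideVM || runtime.GOOS != "windows" {
            return "curl"
        }
        return "curl.exe"
    }

    func main() {
        fmt.Println(curlBinary(true)) // probe runs in the guest: "curl"
    }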
	I0408 23:36:11.671985    4680 ssh_runner.go:235] Completed: cat /version.json: (4.9779236s)
	I0408 23:36:11.686366    4680 ssh_runner.go:195] Run: systemctl --version
	I0408 23:36:11.708570    4680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 23:36:11.717906    4680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 23:36:11.728234    4680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 23:36:11.747584    4680 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0408 23:36:11.747584    4680 start.go:495] detecting cgroup driver to use...
	I0408 23:36:11.747584    4680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0408 23:36:11.768904    4680 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0408 23:36:11.768904    4680 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0408 23:36:11.797321    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0408 23:36:11.831085    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 23:36:11.849662    4680 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 23:36:11.861888    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 23:36:11.903580    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:36:11.943433    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 23:36:11.977323    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:36:12.012379    4680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 23:36:12.046321    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 23:36:12.079535    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 23:36:12.110716    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0408 23:36:12.147517    4680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 23:36:12.178928    4680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 23:36:12.208351    4680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:36:12.410730    4680 ssh_runner.go:195] Run: sudo systemctl restart containerd
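
Note: the sed series above rewrites /etc/containerd/config.toml to match the "cgroupfs" driver minikube selected: SystemdCgroup=false, the runc v2 shim, the bridge CNI conf_dir, and unprivileged ports enabled. Each edit is a line-anchored substitution; an equivalent of the SystemdCgroup one in Go, for illustration:

    package main

    import (
        "fmt"
        "regexp"
    )

    // toggleSystemdCgroup mirrors the sed edit above: force SystemdCgroup=false
    // so containerd matches the cgroupfs driver minikube chose.
    func toggleSystemdCgroup(toml string) string {
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        return re.ReplaceAllString(toml, "${1}SystemdCgroup = false")
    }

    func main() {
        fmt.Print(toggleSystemdCgroup("  SystemdCgroup = true\n"))
    }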
	I0408 23:36:12.439631    4680 start.go:495] detecting cgroup driver to use...
	I0408 23:36:12.451933    4680 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0408 23:36:12.488014    4680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:36:12.521384    4680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 23:36:12.558160    4680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:36:12.599092    4680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 23:36:12.621759    4680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:36:12.666043    4680 ssh_runner.go:195] Run: which cri-dockerd
	I0408 23:36:12.683104    4680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0408 23:36:12.700086    4680 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0408 23:36:12.745200    4680 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0408 23:36:12.942898    4680 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0408 23:36:13.136518    4680 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0408 23:36:13.136518    4680 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0408 23:36:13.182679    4680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:36:13.412451    4680 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 23:37:45.325640    4680 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m31.911983s)
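
Note: this is the pivotal failure of the run: sudo systemctl restart docker blocks for roughly 92 seconds and exits 1, which start.go surfaces as the RUNTIME_ENABLE error below. One defensive pattern (an assumption, not what minikube does) is to put a hard deadline on the restart so a wedged daemon fails fast instead of holding the provisioner:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        // Hypothetical: drive the restart over ssh with a hard deadline.
        out, err := exec.CommandContext(ctx, "ssh", "docker@192.168.113.37",
            "sudo systemctl restart docker").CombinedOutput()
        fmt.Printf("err=%v\n%s", err, out)
    }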
	I0408 23:37:45.337425    4680 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0408 23:37:45.409039    4680 out.go:201] 
	W0408 23:37:45.412124    4680 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 08 23:06:49 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.094333857Z" level=info msg="Starting up"
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.095749501Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.097506580Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.128963677Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152469766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152558876Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152717392Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152739794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152812201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152901110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153079328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153169038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153187739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153197940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153293950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153812303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156561482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156716198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156848512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156952822Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157044531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157169744Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190389421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190521734Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190544737Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190560338Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190576740Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190838067Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191154799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191361820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191472031Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191493633Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191512135Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191527737Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191541238Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191555639Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191571341Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191603144Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191615846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191628447Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191749659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191774162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191800364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191815666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191830867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191844669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191857670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191870171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191882273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191897274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191908775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191920677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191932778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191947379Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191967081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191979383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191992484Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192114796Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192196605Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192262611Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192291214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192304416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192318917Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192331918Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193151202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193285015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193371424Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193820570Z" level=info msg="containerd successfully booted in 0.066941s"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.170474987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.203429127Z" level=info msg="Loading containers: start."
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.350665658Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.583414712Z" level=info msg="Loading containers: done."
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608611503Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608776419Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609056647Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609260067Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.713909013Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.714066029Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:06:50 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.811241096Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813084503Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813257403Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813288003Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813374004Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:07:20 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:07:21 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:07:21 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:07:21 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.861204748Z" level=info msg="Starting up"
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.863521556Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.864856161Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1097
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.891008554Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913514335Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913559535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913591835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913605435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913626835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913637435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913748735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913963436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913985636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913996836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914019636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914159537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.916995847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917210048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917295148Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917328148Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917346448Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917634649Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917741950Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917760750Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917900050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917914850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917957150Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918196151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918327452Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918413452Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918430852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918442352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918453152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918462452Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918473352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918484552Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918499152Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918509952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918520052Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918543853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918558553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918568953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918579553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918589553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918609253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918626253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918638253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918657853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918673253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918682953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918692253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918702953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918715553Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918733953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918744753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918754653Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918959554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919161355Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919325455Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919361655Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919372055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919407356Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919416356Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919735157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919968758Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920117658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920171758Z" level=info msg="containerd successfully booted in 0.029982s"
	Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.908709690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.934950284Z" level=info msg="Loading containers: start."
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.062615440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
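The ip6tables warning above is expected on this guest kernel (5.10.207, per the aufs line earlier): the ip6table_nat module is not available, so dockerd cannot create an IPv6 DOCKER NAT chain and continues IPv4-only. It is startup noise here, not related to the later failure. A minimal Go sketch to confirm which ip6tables tables the kernel has registered, assuming it is run inside the guest VM (the /proc path is the standard netfilter interface):

	// check_ip6nat.go - hedged sketch: lists the ip6tables tables the kernel
	// has registered; "nat" must appear for dockerd's IPv6 DOCKER chain to work.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/proc/net/ip6_tables_names")
		if err != nil {
			// File absent => the ip6_tables module is not loaded at all,
			// matching the "do you need to insmod?" hint in the warning.
			fmt.Println("ip6_tables not available:", err)
			return
		}
		fmt.Println("registered tables:", strings.Fields(string(data)))
	}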
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.175164242Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.282062124Z" level=info msg="Loading containers: done."
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305666909Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305777709Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.341856738Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:07:23 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.343491744Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:07:32 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.905143108Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906371813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906906114Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.907033815Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906918515Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:07:33 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:07:33 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:07:33 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.955484761Z" level=info msg="Starting up"
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.957042767Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.958462672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1462
	Apr 08 23:07:33 functional-618200 dockerd[1462]: time="2025-04-08T23:07:33.983507761Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009132353Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009242353Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009307753Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009324953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009354454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009383954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009545254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009658655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009680555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009691855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009717555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.010024356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012580665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012671765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012945166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013039867Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013070567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013104967Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013460968Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013562869Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013583269Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013598369Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013611569Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013659269Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014010570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014156471Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014247371Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014266571Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014280071Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014397172Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014425272Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014441672Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014458272Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014472772Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014498972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014515572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014537972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014555672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014570972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014585972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014601072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014615672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014629372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014643572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014658573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014679173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014709673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014738473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014783273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014916873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014942274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014955574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014969174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015051774Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015092874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015107074Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015122374Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015133174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015147174Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015158874Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015573476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015638476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015690176Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015715476Z" level=info msg="containerd successfully booted in 0.033079s"
	Apr 08 23:07:35 functional-618200 dockerd[1456]: time="2025-04-08T23:07:35.262471031Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.762713164Z" level=info msg="Loading containers: start."
	Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.897446846Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.015338367Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.153824862Z" level=info msg="Loading containers: done."
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182692065Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182937366Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:07:38 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.220981402Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.221045402Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928174323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928255628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928274329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928976471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011163114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011256119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011273420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011437330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.047888267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048098278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048281989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048657110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089143872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089470391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089714404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.090374541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331240402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331940241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332248459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332901095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587350115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587733437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587951349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.588255466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643351545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643476652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643513354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643620460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681369670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681570881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681658686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.682028307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094044455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094486867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.095561595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.097530446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394114311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394433319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394665025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.395349443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643182806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643370211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643392711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.645053352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216296816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216387017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216402117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216977424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540620784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540963288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541044989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541180590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.848480641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850292361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850566464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.851150170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
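Each four-line "runtime=io.containerd.runc.v2" group above appears to correspond to containerd starting one shim, i.e. one container coming up (sandbox/pause containers first, then the kube-system workloads), so the bursts at 23:07:46-47 and 23:08:00-23:08:07 track the control plane being scheduled. This is normal startup chatter; it brackets the healthy period before the shutdown that follows.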
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.385762643Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:09:27 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574335274Z" level=info msg="shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574507675Z" level=warning msg="cleaning up after shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574520575Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.575374478Z" level=info msg="ignoring event" container=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.602965785Z" level=info msg="ignoring event" container=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.603895489Z" level=info msg="shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604175090Z" level=warning msg="cleaning up after shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604242890Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614380530Z" level=info msg="shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614605231Z" level=warning msg="cleaning up after shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614742231Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.620402053Z" level=info msg="ignoring event" container=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.620802455Z" level=info msg="shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621015255Z" level=info msg="ignoring event" container=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621947059Z" level=info msg="ignoring event" container=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.622304660Z" level=info msg="ignoring event" container=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622827062Z" level=warning msg="cleaning up after shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.623203064Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622314560Z" level=info msg="shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624293868Z" level=warning msg="cleaning up after shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624306868Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622381461Z" level=info msg="shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631193795Z" level=warning msg="cleaning up after shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631249695Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.667400535Z" level=info msg="ignoring event" container=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.669623644Z" level=info msg="shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.672188454Z" level=warning msg="cleaning up after shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.672924657Z" level=info msg="ignoring event" container=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.673767960Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681394990Z" level=info msg="ignoring event" container=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681607190Z" level=info msg="ignoring event" container=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.681903492Z" level=info msg="shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685272405Z" level=warning msg="cleaning up after shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685407505Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.671723952Z" level=info msg="shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693693137Z" level=warning msg="cleaning up after shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693789338Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697563052Z" level=info msg="shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697641053Z" level=warning msg="cleaning up after shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697654453Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.725345060Z" level=info msg="ignoring event" container=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725697262Z" level=info msg="shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725980963Z" level=warning msg="cleaning up after shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.726206964Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.734018694Z" level=info msg="ignoring event" container=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.736798905Z" level=info msg="shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737017505Z" level=warning msg="cleaning up after shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737255906Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1456]: time="2025-04-08T23:09:32.552363388Z" level=info msg="ignoring event" container=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556138103Z" level=info msg="shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556756905Z" level=warning msg="cleaning up after shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556921006Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.565876302Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.643029581Z" level=info msg="ignoring event" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.646699056Z" level=info msg="shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647140153Z" level=warning msg="cleaning up after shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647214253Z" level=info msg="cleaning up dead shim" namespace=moby
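The "using the force" line above means container bdb6045d... ignored SIGTERM for the default 10-second grace period during daemon shutdown and was SIGKILLed. Purely illustrative, if a longer grace period were wanted when stopping that container manually, one could pass docker stop's real timeout flag; a sketch (container ID shortened from the log line above):

	// stop_graceful.go - illustrative only: asks dockerd for a 30s grace
	// period instead of the default 10s before escalating to SIGKILL.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "stop", "-t", "30", "bdb6045d8adb").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("stop failed:", err)
		}
	}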
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724363532Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724563130Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724658330Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724794029Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:09:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Consumed 4.925s CPU time.
	Apr 08 23:09:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:09:38 functional-618200 dockerd[3978]: time="2025-04-08T23:09:38.782261701Z" level=info msg="Starting up"
	Apr 08 23:10:38 functional-618200 dockerd[3978]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:10:38 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Apr 08 23:10:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:10:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:10:38 functional-618200 dockerd[4187]: time="2025-04-08T23:10:38.990065142Z" level=info msg="Starting up"
	Apr 08 23:11:39 functional-618200 dockerd[4187]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:11:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:11:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:11:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:11:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Apr 08 23:11:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:11:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:11:39 functional-618200 dockerd[4495]: time="2025-04-08T23:11:39.240374985Z" level=info msg="Starting up"
	Apr 08 23:12:39 functional-618200 dockerd[4495]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:12:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:12:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:12:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:12:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Apr 08 23:12:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:12:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:12:39 functional-618200 dockerd[4717]: time="2025-04-08T23:12:39.435825366Z" level=info msg="Starting up"
	Apr 08 23:13:39 functional-618200 dockerd[4717]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:13:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:13:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:13:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:13:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Apr 08 23:13:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:13:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:13:39 functional-618200 dockerd[4937]: time="2025-04-08T23:13:39.647599381Z" level=info msg="Starting up"
	Apr 08 23:14:39 functional-618200 dockerd[4937]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:14:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:14:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:14:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:14:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Apr 08 23:14:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:14:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:14:39 functional-618200 dockerd[5287]: time="2025-04-08T23:14:39.994059486Z" level=info msg="Starting up"
	Apr 08 23:15:40 functional-618200 dockerd[5287]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:15:40 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:15:40 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:15:40 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:15:40 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
	Apr 08 23:15:40 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:15:40 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:15:40 functional-618200 dockerd[5511]: time="2025-04-08T23:15:40.241827213Z" level=info msg="Starting up"
	Apr 08 23:16:40 functional-618200 dockerd[5511]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:16:40 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:16:40 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:16:40 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:16:40 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
	Apr 08 23:16:40 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:16:40 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:16:40 functional-618200 dockerd[5774]: time="2025-04-08T23:16:40.479744325Z" level=info msg="Starting up"
	Apr 08 23:17:40 functional-618200 dockerd[5774]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:17:40 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:17:40 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:17:40 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:17:40 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 8.
	Apr 08 23:17:40 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:17:40 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:17:40 functional-618200 dockerd[6010]: time="2025-04-08T23:17:40.734060234Z" level=info msg="Starting up"
	Apr 08 23:18:40 functional-618200 dockerd[6010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:18:40 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:18:40 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:18:40 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:18:40 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 9.
	Apr 08 23:18:40 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:18:40 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:18:40 functional-618200 dockerd[6233]: time="2025-04-08T23:18:40.980938832Z" level=info msg="Starting up"
	Apr 08 23:19:41 functional-618200 dockerd[6233]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:19:41 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:19:41 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:19:41 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:19:41 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 10.
	Apr 08 23:19:41 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:19:41 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:19:41 functional-618200 dockerd[6451]: time="2025-04-08T23:19:41.243144928Z" level=info msg="Starting up"
	Apr 08 23:20:41 functional-618200 dockerd[6451]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:20:41 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:20:41 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:20:41 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:20:41 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
	Apr 08 23:20:41 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:20:41 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:20:41 functional-618200 dockerd[6677]: time="2025-04-08T23:20:41.482548376Z" level=info msg="Starting up"
	Apr 08 23:21:41 functional-618200 dockerd[6677]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:21:41 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:21:41 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:21:41 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:21:41 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
	Apr 08 23:21:41 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:21:41 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:21:41 functional-618200 dockerd[6897]: time="2025-04-08T23:21:41.739358273Z" level=info msg="Starting up"
	Apr 08 23:22:41 functional-618200 dockerd[6897]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:22:41 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:22:41 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:22:41 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:22:41 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 13.
	Apr 08 23:22:41 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:22:41 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:22:41 functional-618200 dockerd[7137]: time="2025-04-08T23:22:41.989317104Z" level=info msg="Starting up"
	Apr 08 23:23:42 functional-618200 dockerd[7137]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:23:42 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:23:42 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:23:42 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:23:42 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
	Apr 08 23:23:42 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:23:42 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:23:42 functional-618200 dockerd[7388]: time="2025-04-08T23:23:42.246986404Z" level=info msg="Starting up"
	Apr 08 23:24:42 functional-618200 dockerd[7388]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:24:42 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:24:42 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:24:42 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:24:42 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 15.
	Apr 08 23:24:42 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:24:42 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:24:42 functional-618200 dockerd[7634]: time="2025-04-08T23:24:42.498712284Z" level=info msg="Starting up"
	Apr 08 23:25:42 functional-618200 dockerd[7634]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:25:42 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:25:42 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:25:42 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:25:42 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 16.
	Apr 08 23:25:42 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:25:42 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:25:42 functional-618200 dockerd[7865]: time="2025-04-08T23:25:42.733372335Z" level=info msg="Starting up"
	Apr 08 23:26:42 functional-618200 dockerd[7865]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:26:42 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:26:42 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:26:42 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:26:42 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
	Apr 08 23:26:42 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:26:42 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:26:42 functional-618200 dockerd[8184]: time="2025-04-08T23:26:42.990759238Z" level=info msg="Starting up"
	Apr 08 23:27:43 functional-618200 dockerd[8184]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:27:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:27:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:27:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:27:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
	Apr 08 23:27:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:27:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:27:43 functional-618200 dockerd[8413]: time="2025-04-08T23:27:43.200403383Z" level=info msg="Starting up"
	Apr 08 23:28:43 functional-618200 dockerd[8413]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:28:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:28:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:28:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:28:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 19.
	Apr 08 23:28:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:28:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:28:43 functional-618200 dockerd[8626]: time="2025-04-08T23:28:43.448813456Z" level=info msg="Starting up"
	Apr 08 23:29:43 functional-618200 dockerd[8626]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:29:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:29:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:29:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:29:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Apr 08 23:29:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:29:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:29:43 functional-618200 dockerd[8971]: time="2025-04-08T23:29:43.729262267Z" level=info msg="Starting up"
	Apr 08 23:30:43 functional-618200 dockerd[8971]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:30:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:30:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:30:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:30:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Apr 08 23:30:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:30:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:30:43 functional-618200 dockerd[9191]: time="2025-04-08T23:30:43.933489137Z" level=info msg="Starting up"
	Apr 08 23:31:43 functional-618200 dockerd[9191]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:31:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:31:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:31:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:31:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
	Apr 08 23:31:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:31:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:31:44 functional-618200 dockerd[9408]: time="2025-04-08T23:31:44.168816618Z" level=info msg="Starting up"
	Apr 08 23:32:44 functional-618200 dockerd[9408]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:32:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:32:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:32:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:32:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
	Apr 08 23:32:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:32:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:32:44 functional-618200 dockerd[9759]: time="2025-04-08T23:32:44.477366695Z" level=info msg="Starting up"
	Apr 08 23:33:44 functional-618200 dockerd[9759]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:33:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:33:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:33:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:33:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
	Apr 08 23:33:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:33:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:33:44 functional-618200 dockerd[9976]: time="2025-04-08T23:33:44.668897222Z" level=info msg="Starting up"
	Apr 08 23:34:44 functional-618200 dockerd[9976]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:34:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:34:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:34:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:34:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 25.
	Apr 08 23:34:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:34:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:34:44 functional-618200 dockerd[10189]: time="2025-04-08T23:34:44.897317954Z" level=info msg="Starting up"
	Apr 08 23:35:44 functional-618200 dockerd[10189]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:35:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:35:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:35:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:35:45 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 26.
	Apr 08 23:35:45 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:35:45 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:35:45 functional-618200 dockerd[10580]: time="2025-04-08T23:35:45.235219924Z" level=info msg="Starting up"
	Apr 08 23:36:13 functional-618200 dockerd[10580]: time="2025-04-08T23:36:13.466116044Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:36:45 functional-618200 dockerd[10580]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:36:45 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:36:45 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:36:45 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:36:45 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:36:45 functional-618200 dockerd[11011]: time="2025-04-08T23:36:45.327202140Z" level=info msg="Starting up"
	Apr 08 23:37:45 functional-618200 dockerd[11011]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:37:45 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:37:45 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:37:45 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0408 23:37:45.413293    4680 out.go:270] * 
	W0408 23:37:45.414464    4680 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 23:37:45.421072    4680 out.go:201] 
	
	
	==> Docker <==
	Apr 08 23:38:45 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:38:45 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:38:45 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Apr 08 23:38:45 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:38:45 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:38:45 functional-618200 dockerd[11507]: time="2025-04-08T23:38:45.742257991Z" level=info msg="Starting up"
	Apr 08 23:39:45 functional-618200 dockerd[11507]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:39:45 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:39:45Z" level=error msg="error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Apr 08 23:39:45 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:39:45Z" level=error msg="error getting RW layer size for container ID 'b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:39:45 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:39:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc'"
	Apr 08 23:39:45 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:39:45Z" level=error msg="error getting RW layer size for container ID 'bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:39:45 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:39:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f'"
	Apr 08 23:39:45 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:39:45Z" level=error msg="error getting RW layer size for container ID 'd1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:39:45 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:39:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee'"
	Apr 08 23:39:45 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:39:45Z" level=error msg="error getting RW layer size for container ID 'e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:39:45 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:39:45Z" level=error msg="error getting RW layer size for container ID '48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:39:45 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:39:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID '48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245'"
	Apr 08 23:39:45 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:39:45Z" level=error msg="error getting RW layer size for container ID 'a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:39:45 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:39:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa'"
	Apr 08 23:39:45 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:39:45Z" level=error msg="error getting RW layer size for container ID 'd4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:39:45 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:39:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61'"
	Apr 08 23:39:45 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:39:45 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:39:45 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:39:45Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c'"
	Apr 08 23:39:45 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-08T23:39:47Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.229826] systemd-fstab-generator[1083]: Ignoring "noauto" option for root device
	[  +2.846583] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.173620] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.175758] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +0.246052] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[  +8.663048] systemd-fstab-generator[1449]: Ignoring "noauto" option for root device
	[  +0.103326] kauditd_printk_skb: 206 callbacks suppressed
	[  +5.045655] kauditd_printk_skb: 24 callbacks suppressed
	[  +0.759487] systemd-fstab-generator[1705]: Ignoring "noauto" option for root device
	[  +6.800944] systemd-fstab-generator[1860]: Ignoring "noauto" option for root device
	[  +0.086630] kauditd_printk_skb: 40 callbacks suppressed
	[  +8.016757] systemd-fstab-generator[2285]: Ignoring "noauto" option for root device
	[  +0.140038] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.396453] systemd-fstab-generator[2387]: Ignoring "noauto" option for root device
	[  +0.210902] kauditd_printk_skb: 12 callbacks suppressed
	[Apr 8 23:08] kauditd_printk_skb: 71 callbacks suppressed
	[Apr 8 23:09] systemd-fstab-generator[3506]: Ignoring "noauto" option for root device
	[  +0.614168] systemd-fstab-generator[3549]: Ignoring "noauto" option for root device
	[  +0.260567] systemd-fstab-generator[3561]: Ignoring "noauto" option for root device
	[  +0.277633] systemd-fstab-generator[3575]: Ignoring "noauto" option for root device
	[  +5.335755] kauditd_printk_skb: 89 callbacks suppressed
	[Apr 8 23:36] systemd-fstab-generator[10836]: Ignoring "noauto" option for root device
	[  +0.553131] systemd-fstab-generator[10872]: Ignoring "noauto" option for root device
	[  +0.187836] systemd-fstab-generator[10884]: Ignoring "noauto" option for root device
	[  +0.251836] systemd-fstab-generator[10898]: Ignoring "noauto" option for root device
	
	
	==> kernel <==
	 23:40:46 up 35 min,  0 users,  load average: 0.08, 0.02, 0.01
	Linux functional-618200 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 08 23:40:33 functional-618200 kubelet[2292]: I0408 23:40:33.986893    2292 status_manager.go:890] "Failed to get status for pod" podUID="9fb511c70f1101c6e5f88375ee4557ca" pod="kube-system/etcd-functional-618200" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-618200\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:40:35 functional-618200 kubelet[2292]: E0408 23:40:35.456721    2292 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 31m8.444413455s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 08 23:40:38 functional-618200 kubelet[2292]: E0408 23:40:38.919842    2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-618200?timeout=10s\": dial tcp 192.168.113.37:8441: connect: connection refused" interval="7s"
	Apr 08 23:40:39 functional-618200 kubelet[2292]: E0408 23:40:39.317682    2292 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-scheduler-functional-618200.18347a9c843b9810\": dial tcp 192.168.113.37:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-functional-618200.18347a9c843b9810  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-functional-618200,UID:2d86200df590720b9ed4835cb131ef10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-618200,},FirstTimestamp:2025-04-08 23:09:28.351209488 +0000 UTC m=+94.582390377,LastTimestamp:2025-04-08 23:09:33.353699542 +0000 UTC m=+99.584880531,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-618200,}"
	Apr 08 23:40:40 functional-618200 kubelet[2292]: E0408 23:40:40.457449    2292 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 31m13.44514709s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 08 23:40:43 functional-618200 kubelet[2292]: I0408 23:40:43.983241    2292 status_manager.go:890] "Failed to get status for pod" podUID="9fb511c70f1101c6e5f88375ee4557ca" pod="kube-system/etcd-functional-618200" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-618200\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:40:43 functional-618200 kubelet[2292]: I0408 23:40:43.984183    2292 status_manager.go:890] "Failed to get status for pod" podUID="195f529b1fbee47263ef9fc136a700cc" pod="kube-system/kube-apiserver-functional-618200" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-618200\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:40:43 functional-618200 kubelet[2292]: I0408 23:40:43.985596    2292 status_manager.go:890] "Failed to get status for pod" podUID="2d86200df590720b9ed4835cb131ef10" pod="kube-system/kube-scheduler-functional-618200" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-618200\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:40:45 functional-618200 kubelet[2292]: E0408 23:40:45.458897    2292 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 31m18.446597136s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Apr 08 23:40:45 functional-618200 kubelet[2292]: E0408 23:40:45.922323    2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-618200?timeout=10s\": dial tcp 192.168.113.37:8441: connect: connection refused" interval="7s"
	Apr 08 23:40:45 functional-618200 kubelet[2292]: E0408 23:40:45.973439    2292 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 08 23:40:45 functional-618200 kubelet[2292]: E0408 23:40:45.973486    2292 kuberuntime_container.go:508] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:40:45 functional-618200 kubelet[2292]: E0408 23:40:45.973821    2292 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:40:45 functional-618200 kubelet[2292]: E0408 23:40:45.973853    2292 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:40:45 functional-618200 kubelet[2292]: E0408 23:40:45.973875    2292 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 08 23:40:45 functional-618200 kubelet[2292]: E0408 23:40:45.973894    2292 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:40:45 functional-618200 kubelet[2292]: E0408 23:40:45.973944    2292 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 08 23:40:45 functional-618200 kubelet[2292]: E0408 23:40:45.974047    2292 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:40:45 functional-618200 kubelet[2292]: E0408 23:40:45.974071    2292 generic.go:256] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:40:45 functional-618200 kubelet[2292]: E0408 23:40:45.974114    2292 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 08 23:40:45 functional-618200 kubelet[2292]: E0408 23:40:45.974131    2292 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:40:45 functional-618200 kubelet[2292]: I0408 23:40:45.974143    2292 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:40:45 functional-618200 kubelet[2292]: E0408 23:40:45.974797    2292 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 08 23:40:45 functional-618200 kubelet[2292]: E0408 23:40:45.974827    2292 kuberuntime_container.go:508] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 08 23:40:45 functional-618200 kubelet[2292]: E0408 23:40:45.975087    2292 kubelet.go:1529] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0408 23:38:45.524171    3016 logs.go:279] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:38:45.555009    3016 logs.go:279] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:38:45.587900    3016 logs.go:279] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:38:45.621652    3016 logs.go:279] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:38:45.653521    3016 logs.go:279] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:39:45.740574    3016 logs.go:279] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:39:45.777508    3016 logs.go:279] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:39:45.810095    3016 logs.go:279] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-618200 -n functional-618200
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-618200 -n functional-618200: exit status 2 (12.0280284s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-618200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (361.52s)
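
The journal excerpt above shows one failure mode repeating for over half an hour: each new dockerd process logs "Starting up", waits exactly 60 seconds, and exits because it cannot dial /run/containerd/containerd.sock, and systemd's restart counter climbs past 26 while the test waits. That socket path points at a containerd that never comes up, rather than at dockerd itself. A minimal triage sketch, assuming shell access to the guest via `minikube ssh` and assuming the Buildroot guest runs containerd as its own systemd unit (the unit names below are that assumption; the profile name is taken from the logs):

	# open a shell in the VM of the failing profile
	out/minikube-windows-amd64.exe ssh -p functional-618200

	# inside the VM: does the socket dockerd is dialing exist at all?
	ls -l /run/containerd/containerd.sock

	# state and recent logs of the runtime units
	sudo systemctl status containerd docker
	sudo journalctl -u containerd -u docker --no-pager | tail -n 100

If containerd's own journal is empty, the next question is why dockerd is pointed at the external socket at all (dockerd's --containerd flag, rather than the managed containerd it would otherwise spawn itself).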

                                                
                                    
TestFunctional/serial/ComponentHealth (180.61s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-618200 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:827: (dbg) Non-zero exit: kubectl --context functional-618200 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (10.3779776s)

                                                
                                                
** stderr ** 
	E0408 23:41:00.747907    9044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.113.37:8441/api?timeout=32s\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it."
	E0408 23:41:02.857428    9044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.113.37:8441/api?timeout=32s\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it."
	E0408 23:41:04.882887    9044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.113.37:8441/api?timeout=32s\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it."
	E0408 23:41:06.930831    9044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.113.37:8441/api?timeout=32s\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it."
	E0408 23:41:08.963107    9044 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.113.37:8441/api?timeout=32s\": dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it."
	Unable to connect to the server: dial tcp 192.168.113.37:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:829: failed to get components. args "kubectl --context functional-618200 get po -l tier=control-plane -n kube-system -o=json": exit status 1
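Every kubectl attempt above fails at the TCP level ("connectex: ... actively refused") before any API discovery happens, which matches the dead container runtime: nothing is listening on 192.168.113.37:8441, so control-plane pod health cannot be queried at all. A sketch for separating "apiserver down" from "client misconfigured", assuming the same kubeconfig context is available on the host (once something does listen, even a 401/403 response is proof of a live listener):

	# single health endpoint, bypassing API group discovery
	kubectl --context functional-618200 get --raw /readyz

	# raw TCP/TLS probe with no kubectl involved
	curl -k https://192.168.113.37:8441/readyz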
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-618200 -n functional-618200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-618200 -n functional-618200: exit status 2 (11.7906273s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-618200 logs -n 25
E0408 23:43:10.469061    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-618200 logs -n 25: (2m26.2217067s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| unpause | nospam-268300 --log_dir                                                  | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-268300 --log_dir                                                  | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-268300 --log_dir                                                  | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                                  | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                                  | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                                  | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| delete  | -p nospam-268300                                                         | nospam-268300     | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	| start   | -p functional-618200                                                     | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:08 UTC |
	|         | --memory=4000                                                            |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |         |                     |                     |
	| start   | -p functional-618200                                                     | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:08 UTC |                     |
	|         | --alsologtostderr -v=8                                                   |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache add                                              | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:15 UTC | 08 Apr 25 23:17 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache add                                              | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:17 UTC | 08 Apr 25 23:19 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache add                                              | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:19 UTC | 08 Apr 25 23:21 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache add                                              | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:21 UTC | 08 Apr 25 23:22 UTC |
	|         | minikube-local-cache-test:functional-618200                              |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache delete                                           | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC | 08 Apr 25 23:22 UTC |
	|         | minikube-local-cache-test:functional-618200                              |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC | 08 Apr 25 23:22 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | list                                                                     | minikube          | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC | 08 Apr 25 23:22 UTC |
	| ssh     | functional-618200 ssh sudo                                               | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC |                     |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-618200                                                        | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC |                     |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-618200 ssh                                                    | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:23 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-618200 cache reload                                           | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:23 UTC | 08 Apr 25 23:25 UTC |
	| ssh     | functional-618200 ssh                                                    | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:25 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:25 UTC | 08 Apr 25 23:25 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:25 UTC | 08 Apr 25 23:25 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-618200 kubectl --                                             | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:28 UTC |                     |
	|         | --context functional-618200                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-618200                                                     | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:34 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 23:34:57
	Running on machine: minikube6
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 23:34:57.160655    4680 out.go:345] Setting OutFile to fd 1364 ...
	I0408 23:34:57.227306    4680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:34:57.227306    4680 out.go:358] Setting ErrFile to fd 1372...
	I0408 23:34:57.227306    4680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:34:57.246367    4680 out.go:352] Setting JSON to false
	I0408 23:34:57.249336    4680 start.go:129] hostinfo: {"hostname":"minikube6","uptime":12294,"bootTime":1744143002,"procs":175,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0408 23:34:57.250337    4680 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 23:34:57.254337    4680 out.go:177] * [functional-618200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0408 23:34:57.259337    4680 notify.go:220] Checking for updates...
	I0408 23:34:57.259337    4680 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0408 23:34:57.262420    4680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 23:34:57.265869    4680 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0408 23:34:57.268764    4680 out.go:177]   - MINIKUBE_LOCATION=20501
	I0408 23:34:57.271844    4680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 23:34:57.274915    4680 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:34:57.275745    4680 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 23:35:02.492013    4680 out.go:177] * Using the hyperv driver based on existing profile
	I0408 23:35:02.497227    4680 start.go:297] selected driver: hyperv
	I0408 23:35:02.497227    4680 start.go:901] validating driver "hyperv" against &{Name:functional-618200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-618200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.37 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:35:02.497227    4680 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 23:35:02.546322    4680 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 23:35:02.546322    4680 cni.go:84] Creating CNI manager for ""
	I0408 23:35:02.547269    4680 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 23:35:02.547269    4680 start.go:340] cluster config:
	{Name:functional-618200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-618200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.37 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:35:02.547269    4680 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 23:35:02.554319    4680 out.go:177] * Starting "functional-618200" primary control-plane node in "functional-618200" cluster
	I0408 23:35:02.556281    4680 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0408 23:35:02.557271    4680 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0408 23:35:02.557271    4680 cache.go:56] Caching tarball of preloaded images
	I0408 23:35:02.557271    4680 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0408 23:35:02.557271    4680 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0408 23:35:02.557271    4680 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-618200\config.json ...
	I0408 23:35:02.560231    4680 start.go:360] acquireMachinesLock for functional-618200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 23:35:02.560231    4680 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-618200"
	I0408 23:35:02.560231    4680 start.go:96] Skipping create...Using existing machine configuration
	I0408 23:35:02.560231    4680 fix.go:54] fixHost starting: 
	I0408 23:35:02.560231    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:05.222913    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:05.222913    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:05.222999    4680 fix.go:112] recreateIfNeeded on functional-618200: state=Running err=<nil>
	W0408 23:35:05.222999    4680 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 23:35:05.225946    4680 out.go:177] * Updating the running hyperv "functional-618200" VM ...
	I0408 23:35:05.230009    4680 machine.go:93] provisionDockerMachine start ...
	I0408 23:35:05.230204    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:07.291911    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:07.292084    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:07.292225    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:09.764896    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:09.764896    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:09.772026    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:09.772916    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:09.772916    4680 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 23:35:09.909726    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-618200
	
	I0408 23:35:09.909912    4680 buildroot.go:166] provisioning hostname "functional-618200"
	I0408 23:35:09.909912    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:11.997581    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:11.997581    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:11.998187    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:14.437911    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:14.437911    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:14.443507    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:14.444263    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:14.444331    4680 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-618200 && echo "functional-618200" | sudo tee /etc/hostname
	I0408 23:35:14.603359    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-618200
	
	I0408 23:35:14.603469    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:16.670523    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:16.671534    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:16.671557    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:19.147238    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:19.147238    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:19.153778    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:19.154064    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:19.154064    4680 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-618200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-618200/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-618200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 23:35:19.293655    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
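
The /etc/hosts command above is idempotent: the outer grep skips the update entirely when the hostname is already mapped, and the inner grep chooses between rewriting an existing 127.0.1.1 entry and appending a new one. A minimal Go sketch of assembling such a command string (illustrative only, not minikube's actual helper):

package main

import "fmt"

// hostsEnsureCmd mirrors the guarded /etc/hosts update from the log above:
// rewrite an existing 127.0.1.1 entry when present, append one otherwise,
// and do nothing if the hostname is already mapped.
func hostsEnsureCmd(h string) string {
	return fmt.Sprintf("if ! grep -xq '.*\\s%[1]s' /etc/hosts; then if grep -xq '127.0.1.1\\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\\s.*/127.0.1.1 %[1]s/g' /etc/hosts; else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi", h)
}

func main() { fmt.Println(hostsEnsureCmd("functional-618200")) }
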
	I0408 23:35:19.293818    4680 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0408 23:35:19.293818    4680 buildroot.go:174] setting up certificates
	I0408 23:35:19.293918    4680 provision.go:84] configureAuth start
	I0408 23:35:19.293918    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:21.418011    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:21.418011    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:21.418011    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:23.915067    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:23.915750    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:23.915843    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:26.054110    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:26.054110    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:26.054245    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:28.570897    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:28.570897    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:28.570979    4680 provision.go:143] copyHostCerts
	I0408 23:35:28.571441    4680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0408 23:35:28.571441    4680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0408 23:35:28.572091    4680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0408 23:35:28.573882    4680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0408 23:35:28.573882    4680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0408 23:35:28.574303    4680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0408 23:35:28.575503    4680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0408 23:35:28.575503    4680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0408 23:35:28.575803    4680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0408 23:35:28.576584    4680 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-618200 san=[127.0.0.1 192.168.113.37 functional-618200 localhost minikube]
	I0408 23:35:28.959411    4680 provision.go:177] copyRemoteCerts
	I0408 23:35:28.968415    4680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 23:35:28.968415    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:31.020048    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:31.020048    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:31.020798    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:33.462537    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:33.462537    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:33.462537    4680 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:35:33.576960    4680 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6084838s)
	I0408 23:35:33.577533    4680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0408 23:35:33.623672    4680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 23:35:33.670466    4680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 23:35:33.717400    4680 provision.go:87] duration metric: took 14.4232931s to configureAuth
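
provision.go:117 above generates the server certificate with SANs covering the VM IP, the profile name, localhost, and minikube. A hedged, self-signed sketch of the same SAN set using Go's standard crypto/x509 (minikube itself signs with the ca.pem/ca-key.pem pair referenced above; error handling elided):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-618200"}},
		// Same SAN set as provision.go:117 in the log above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.113.37")},
		DNSNames:    []string{"functional-618200", "localhost", "minikube"},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
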
	I0408 23:35:33.717400    4680 buildroot.go:189] setting minikube options for container-runtime
	I0408 23:35:33.717979    4680 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:35:33.718051    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:35.820801    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:35.821878    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:35.822118    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:38.293353    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:38.293353    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:38.299330    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:38.300018    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:38.300018    4680 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0408 23:35:38.425797    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0408 23:35:38.425797    4680 buildroot.go:70] root file system type: tmpfs
	I0408 23:35:38.426995    4680 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0408 23:35:38.427061    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:40.452569    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:40.452569    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:40.452796    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:42.927371    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:42.927371    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:42.934515    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:42.935261    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:42.935261    4680 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0408 23:35:43.086612    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0408 23:35:43.086740    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:45.178050    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:45.178179    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:45.178179    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:47.646488    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:47.647562    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:47.653138    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:47.653919    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:47.653919    4680 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0408 23:35:47.796320    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
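
The unit written above follows the standard systemd override pattern its own comments describe: the first, empty ExecStart= clears the inherited command so the second one replaces it rather than being rejected as a duplicate. A hedged sketch of one way to render such a unit with Go's text/template (field names here are illustrative, not minikube's):

package main

import (
	"os"
	"text/template"
)

// Trimmed to the essential pattern from the unit in the log above: the empty
// ExecStart= resets the inherited command before the TLS-enabled one is set.
const unit = `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert {{.CA}} --tlscert {{.Cert}} --tlskey {{.Key}}
`

func main() {
	t := template.Must(template.New("docker").Parse(unit))
	// Remote cert paths match the scp destinations earlier in this log.
	t.Execute(os.Stdout, map[string]string{
		"CA":   "/etc/docker/ca.pem",
		"Cert": "/etc/docker/server.pem",
		"Key":  "/etc/docker/server-key.pem",
	})
}
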
	I0408 23:35:47.796320    4680 machine.go:96] duration metric: took 42.5657539s to provisionDockerMachine
	I0408 23:35:47.796320    4680 start.go:293] postStartSetup for "functional-618200" (driver="hyperv")
	I0408 23:35:47.796508    4680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 23:35:47.808373    4680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 23:35:47.808373    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:49.907410    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:49.907410    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:49.907410    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:52.435264    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:52.435264    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:52.436078    4680 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:35:52.536680    4680 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7282442s)
	I0408 23:35:52.550709    4680 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 23:35:52.557305    4680 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 23:35:52.557354    4680 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0408 23:35:52.558201    4680 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0408 23:35:52.560040    4680 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
	I0408 23:35:52.561052    4680 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts -> hosts in /etc/test/nested/copy/9864
	I0408 23:35:52.572449    4680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9864
	I0408 23:35:52.591479    4680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
	I0408 23:35:52.632158    4680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts --> /etc/test/nested/copy/9864/hosts (40 bytes)
	I0408 23:35:52.674167    4680 start.go:296] duration metric: took 4.8777819s for postStartSetup
	I0408 23:35:52.674305    4680 fix.go:56] duration metric: took 50.113417s for fixHost
	I0408 23:35:52.674384    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:54.767684    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:54.767684    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:54.767684    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:57.261834    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:57.261834    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:57.271187    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:57.271187    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:57.271187    4680 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 23:35:57.398373    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744155357.426064067
	
	I0408 23:35:57.398373    4680 fix.go:216] guest clock: 1744155357.426064067
	I0408 23:35:57.398373    4680 fix.go:229] Guest: 2025-04-08 23:35:57.426064067 +0000 UTC Remote: 2025-04-08 23:35:52.6743059 +0000 UTC m=+55.594526801 (delta=4.751758167s)
	I0408 23:35:57.398607    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:59.476535    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:59.476535    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:59.477439    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:36:01.946547    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:36:01.946755    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:01.952277    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:36:01.952431    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:36:01.952431    4680 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744155357
	I0408 23:36:02.109581    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr  8 23:35:57 UTC 2025
	
	I0408 23:36:02.109581    4680 fix.go:236] clock set: Tue Apr  8 23:35:57 UTC 2025 (err=<nil>)
	I0408 23:36:02.109581    4680 start.go:83] releasing machines lock for "functional-618200", held for 59.5485681s
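
The delta reported by fix.go:229 above is plain wall-clock arithmetic: the guest time, read via "date +%s.%N", minus the host-side timestamp. A small Go sketch reproducing the computation with the values from this run:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the fix.go lines above (illustrative arithmetic only).
	guest := time.Unix(0, int64(1744155357.426064067*1e9)).UTC()
	remote, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2025-04-08 23:35:52.6743059 +0000 UTC")
	fmt.Println(guest.Sub(remote)) // ~4.751758s, matching the logged delta
}
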
	I0408 23:36:02.110548    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:36:04.180009    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:36:04.180193    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:04.180261    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:36:06.679668    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:36:06.679668    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:06.684777    4680 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0408 23:36:06.684909    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:36:06.693996    4680 ssh_runner.go:195] Run: cat /version.json
	I0408 23:36:06.693996    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:36:08.902982    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:36:08.903217    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:08.903217    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:36:08.911965    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:36:08.911965    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:08.911965    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:36:11.559377    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:36:11.559377    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:11.560763    4680 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:36:11.579839    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:36:11.579839    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:11.580662    4680 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:36:11.653575    4680 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9686214s)
	W0408 23:36:11.653575    4680 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0408 23:36:11.671985    4680 ssh_runner.go:235] Completed: cat /version.json: (4.9779236s)
	I0408 23:36:11.686366    4680 ssh_runner.go:195] Run: systemctl --version
	I0408 23:36:11.708570    4680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 23:36:11.717906    4680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 23:36:11.728234    4680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 23:36:11.747584    4680 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0408 23:36:11.747584    4680 start.go:495] detecting cgroup driver to use...
	I0408 23:36:11.747584    4680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0408 23:36:11.768904    4680 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0408 23:36:11.768904    4680 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0408 23:36:11.797321    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0408 23:36:11.831085    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 23:36:11.849662    4680 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 23:36:11.861888    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 23:36:11.903580    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:36:11.943433    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 23:36:11.977323    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:36:12.012379    4680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 23:36:12.046321    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 23:36:12.079535    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 23:36:12.110716    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0408 23:36:12.147517    4680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 23:36:12.178928    4680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 23:36:12.208351    4680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:36:12.410730    4680 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 23:36:12.439631    4680 start.go:495] detecting cgroup driver to use...
	I0408 23:36:12.451933    4680 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0408 23:36:12.488014    4680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:36:12.521384    4680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 23:36:12.558160    4680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:36:12.599092    4680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 23:36:12.621759    4680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:36:12.666043    4680 ssh_runner.go:195] Run: which cri-dockerd
	I0408 23:36:12.683104    4680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0408 23:36:12.700086    4680 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0408 23:36:12.745200    4680 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0408 23:36:12.942898    4680 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0408 23:36:13.136518    4680 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0408 23:36:13.136518    4680 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0408 23:36:13.182679    4680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:36:13.412451    4680 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 23:37:45.325640    4680 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m31.911983s)
	I0408 23:37:45.337425    4680 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0408 23:37:45.409039    4680 out.go:201] 
	W0408 23:37:45.412124    4680 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 08 23:06:49 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.094333857Z" level=info msg="Starting up"
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.095749501Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.097506580Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.128963677Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152469766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152558876Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152717392Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152739794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152812201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152901110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153079328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153169038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153187739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153197940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153293950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153812303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156561482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156716198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156848512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156952822Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157044531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157169744Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190389421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190521734Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190544737Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190560338Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190576740Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190838067Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191154799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191361820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191472031Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191493633Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191512135Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191527737Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191541238Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191555639Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191571341Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191603144Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191615846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191628447Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191749659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191774162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191800364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191815666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191830867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191844669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191857670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191870171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191882273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191897274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191908775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191920677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191932778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191947379Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191967081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191979383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191992484Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192114796Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192196605Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192262611Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192291214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192304416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192318917Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192331918Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193151202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193285015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193371424Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193820570Z" level=info msg="containerd successfully booted in 0.066941s"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.170474987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.203429127Z" level=info msg="Loading containers: start."
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.350665658Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.583414712Z" level=info msg="Loading containers: done."
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608611503Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608776419Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609056647Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609260067Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.713909013Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.714066029Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:06:50 functional-618200 systemd[1]: Started Docker Application Container Engine.
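The first boot of the engine completes here, but note the two warnings just above: bridge-nf-call-iptables and bridge-nf-call-ip6tables are reported disabled, and the IPv6 NAT chain cannot be created. The ip6tables message is typically harmless on an IPv4-only cluster, but kube-proxy in iptables mode generally expects bridged traffic to traverse iptables. A minimal check from inside the VM (reachable via out/minikube-windows-amd64.exe -p functional-618200 ssh; the expected values are an assumption about a healthy node, not something this log confirms):

	sudo modprobe br_netfilter                            # exposes the net.bridge.* sysctls
	sysctl net.bridge.bridge-nf-call-iptables             # expect: ... = 1
	sudo sysctl -w net.bridge.bridge-nf-call-iptables=1   # enable it if the read returns 0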
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.811241096Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813084503Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813257403Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813288003Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813374004Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:07:20 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:07:21 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:07:21 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
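This stop/start cycle (and the one that follows at 23:07:32) is consistent with minikube restarting Docker while provisioning the VM, for example after rewriting the daemon configuration; by itself it is not an error. A quick way to confirm how many cycles occurred is to count the startup banners in the journal (run inside the VM; "Starting up" matches the dockerd banner visible in this log):

	sudo journalctl -u docker --no-pager | grep -c "Starting up"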
	Apr 08 23:07:21 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.861204748Z" level=info msg="Starting up"
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.863521556Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.864856161Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1097
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.891008554Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913514335Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913559535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913591835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913605435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913626835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913637435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913748735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913963436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913985636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913996836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914019636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914159537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.916995847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917210048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917295148Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917328148Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917346448Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917634649Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917741950Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917760750Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917900050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917914850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917957150Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918196151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918327452Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918413452Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918430852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918442352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918453152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918462452Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918473352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918484552Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918499152Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918509952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918520052Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918543853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918558553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918568953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918579553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918589553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918609253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918626253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918638253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918657853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918673253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918682953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918692253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918702953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918715553Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918733953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918744753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918754653Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918959554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919161355Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919325455Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919361655Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919372055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919407356Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919416356Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919735157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919968758Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920117658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920171758Z" level=info msg="containerd successfully booted in 0.029982s"
	Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.908709690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.934950284Z" level=info msg="Loading containers: start."
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.062615440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.175164242Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.282062124Z" level=info msg="Loading containers: done."
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305666909Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305777709Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.341856738Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:07:23 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.343491744Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:07:32 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.905143108Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906371813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906906114Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.907033815Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906918515Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:07:33 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:07:33 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:07:33 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.955484761Z" level=info msg="Starting up"
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.957042767Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.958462672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1462
	Apr 08 23:07:33 functional-618200 dockerd[1462]: time="2025-04-08T23:07:33.983507761Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009132353Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009242353Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009307753Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009324953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009354454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009383954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009545254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009658655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009680555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009691855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009717555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.010024356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012580665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012671765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012945166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013039867Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013070567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013104967Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013460968Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013562869Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013583269Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013598369Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013611569Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013659269Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014010570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014156471Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014247371Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014266571Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014280071Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014397172Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014425272Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014441672Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014458272Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014472772Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014498972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014515572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014537972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014555672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014570972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014585972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014601072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014615672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014629372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014643572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014658573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014679173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014709673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014738473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014783273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014916873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014942274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014955574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014969174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015051774Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015092874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015107074Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015122374Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015133174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015147174Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015158874Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015573476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015638476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015690176Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015715476Z" level=info msg="containerd successfully booted in 0.033079s"
	Apr 08 23:07:35 functional-618200 dockerd[1456]: time="2025-04-08T23:07:35.262471031Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.762713164Z" level=info msg="Loading containers: start."
	Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.897446846Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.015338367Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.153824862Z" level=info msg="Loading containers: done."
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182692065Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182937366Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:07:38 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.220981402Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.221045402Z" level=info msg="API listen on [::]:2376"
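With the third start the daemon is again serving on /var/run/docker.sock and on TLS port 2376, the endpoint the host-side machine driver talks to. A host-side sanity check, assuming the profile name from this run, is to ask minikube for the client environment rather than hard-coding the VM address:

	out/minikube-windows-amd64.exe -p functional-618200 docker-env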
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928174323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928255628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928274329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928976471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011163114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011256119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011273420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011437330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.047888267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048098278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048281989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048657110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089143872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089470391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089714404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.090374541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331240402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331940241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332248459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332901095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587350115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587733437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587951349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.588255466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643351545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643476652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643513354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643620460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681369670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681570881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681658686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.682028307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094044455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094486867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.095561595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.097530446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394114311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394433319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394665025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.395349443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643182806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643370211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643392711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.645053352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216296816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216387017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216402117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216977424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540620784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540963288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541044989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541180590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.848480641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850292361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850566464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.851150170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
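Each four-line group of io.containerd.runc.v2 plugin loads above appears to correspond to one containerd shim being launched for a new container, so this stretch (23:07:46 through 23:08:07) is most plausibly the Kubernetes control-plane containers coming up. To map shims to containers while the daemon is still running, something like the following inside the VM should suffice (the format string is illustrative):

	docker ps --format "{{.ID}}\t{{.Names}}\t{{.Status}}"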
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.385762643Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:09:27 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574335274Z" level=info msg="shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574507675Z" level=warning msg="cleaning up after shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574520575Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.575374478Z" level=info msg="ignoring event" container=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.602965785Z" level=info msg="ignoring event" container=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.603895489Z" level=info msg="shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604175090Z" level=warning msg="cleaning up after shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604242890Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614380530Z" level=info msg="shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614605231Z" level=warning msg="cleaning up after shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614742231Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.620402053Z" level=info msg="ignoring event" container=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.620802455Z" level=info msg="shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621015255Z" level=info msg="ignoring event" container=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621947059Z" level=info msg="ignoring event" container=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.622304660Z" level=info msg="ignoring event" container=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622827062Z" level=warning msg="cleaning up after shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.623203064Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622314560Z" level=info msg="shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624293868Z" level=warning msg="cleaning up after shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624306868Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622381461Z" level=info msg="shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631193795Z" level=warning msg="cleaning up after shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631249695Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.667400535Z" level=info msg="ignoring event" container=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.669623644Z" level=info msg="shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.672188454Z" level=warning msg="cleaning up after shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.672924657Z" level=info msg="ignoring event" container=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.673767960Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681394990Z" level=info msg="ignoring event" container=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681607190Z" level=info msg="ignoring event" container=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.681903492Z" level=info msg="shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685272405Z" level=warning msg="cleaning up after shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685407505Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.671723952Z" level=info msg="shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693693137Z" level=warning msg="cleaning up after shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693789338Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697563052Z" level=info msg="shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697641053Z" level=warning msg="cleaning up after shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697654453Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.725345060Z" level=info msg="ignoring event" container=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725697262Z" level=info msg="shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725980963Z" level=warning msg="cleaning up after shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.726206964Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.734018694Z" level=info msg="ignoring event" container=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.736798905Z" level=info msg="shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737017505Z" level=warning msg="cleaning up after shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737255906Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1456]: time="2025-04-08T23:09:32.552363388Z" level=info msg="ignoring event" container=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556138103Z" level=info msg="shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556756905Z" level=warning msg="cleaning up after shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556921006Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.565876302Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.643029581Z" level=info msg="ignoring event" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.646699056Z" level=info msg="shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647140153Z" level=warning msg="cleaning up after shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647214253Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724363532Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724563130Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724658330Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724794029Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:09:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Consumed 4.925s CPU time.
	Apr 08 23:09:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:09:38 functional-618200 dockerd[3978]: time="2025-04-08T23:09:38.782261701Z" level=info msg="Starting up"
	Apr 08 23:10:38 functional-618200 dockerd[3978]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:10:38 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Apr 08 23:10:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:10:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:10:38 functional-618200 dockerd[4187]: time="2025-04-08T23:10:38.990065142Z" level=info msg="Starting up"
	Apr 08 23:11:39 functional-618200 dockerd[4187]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:11:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:11:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:11:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:11:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Apr 08 23:11:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:11:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:11:39 functional-618200 dockerd[4495]: time="2025-04-08T23:11:39.240374985Z" level=info msg="Starting up"
	Apr 08 23:12:39 functional-618200 dockerd[4495]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:12:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:12:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:12:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:12:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Apr 08 23:12:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:12:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:12:39 functional-618200 dockerd[4717]: time="2025-04-08T23:12:39.435825366Z" level=info msg="Starting up"
	Apr 08 23:13:39 functional-618200 dockerd[4717]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:13:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:13:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:13:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:13:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Apr 08 23:13:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:13:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:13:39 functional-618200 dockerd[4937]: time="2025-04-08T23:13:39.647599381Z" level=info msg="Starting up"
	Apr 08 23:14:39 functional-618200 dockerd[4937]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:14:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:14:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:14:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:14:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Apr 08 23:14:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:14:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:14:39 functional-618200 dockerd[5287]: time="2025-04-08T23:14:39.994059486Z" level=info msg="Starting up"
	Apr 08 23:15:40 functional-618200 dockerd[5287]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:15:40 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:15:40 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:15:40 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:15:40 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
	Apr 08 23:15:40 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:15:40 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:15:40 functional-618200 dockerd[5511]: time="2025-04-08T23:15:40.241827213Z" level=info msg="Starting up"
	Apr 08 23:16:40 functional-618200 dockerd[5511]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:16:40 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:16:40 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:16:40 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:16:40 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
	Apr 08 23:16:40 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:16:40 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:16:40 functional-618200 dockerd[5774]: time="2025-04-08T23:16:40.479744325Z" level=info msg="Starting up"
	Apr 08 23:17:40 functional-618200 dockerd[5774]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:17:40 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:17:40 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:17:40 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:17:40 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 8.
	Apr 08 23:17:40 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:17:40 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:17:40 functional-618200 dockerd[6010]: time="2025-04-08T23:17:40.734060234Z" level=info msg="Starting up"
	Apr 08 23:18:40 functional-618200 dockerd[6010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:18:40 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:18:40 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:18:40 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:18:40 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 9.
	Apr 08 23:18:40 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:18:40 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:18:40 functional-618200 dockerd[6233]: time="2025-04-08T23:18:40.980938832Z" level=info msg="Starting up"
	Apr 08 23:19:41 functional-618200 dockerd[6233]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:19:41 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:19:41 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:19:41 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:19:41 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 10.
	Apr 08 23:19:41 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:19:41 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:19:41 functional-618200 dockerd[6451]: time="2025-04-08T23:19:41.243144928Z" level=info msg="Starting up"
	Apr 08 23:20:41 functional-618200 dockerd[6451]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:20:41 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:20:41 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:20:41 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:20:41 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
	Apr 08 23:20:41 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:20:41 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:20:41 functional-618200 dockerd[6677]: time="2025-04-08T23:20:41.482548376Z" level=info msg="Starting up"
	Apr 08 23:21:41 functional-618200 dockerd[6677]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:21:41 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:21:41 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:21:41 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:21:41 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
	Apr 08 23:21:41 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:21:41 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:21:41 functional-618200 dockerd[6897]: time="2025-04-08T23:21:41.739358273Z" level=info msg="Starting up"
	Apr 08 23:22:41 functional-618200 dockerd[6897]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:22:41 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:22:41 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:22:41 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:22:41 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 13.
	Apr 08 23:22:41 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:22:41 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:22:41 functional-618200 dockerd[7137]: time="2025-04-08T23:22:41.989317104Z" level=info msg="Starting up"
	Apr 08 23:23:42 functional-618200 dockerd[7137]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:23:42 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:23:42 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:23:42 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:23:42 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
	Apr 08 23:23:42 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:23:42 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:23:42 functional-618200 dockerd[7388]: time="2025-04-08T23:23:42.246986404Z" level=info msg="Starting up"
	Apr 08 23:24:42 functional-618200 dockerd[7388]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:24:42 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:24:42 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:24:42 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:24:42 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 15.
	Apr 08 23:24:42 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:24:42 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:24:42 functional-618200 dockerd[7634]: time="2025-04-08T23:24:42.498712284Z" level=info msg="Starting up"
	Apr 08 23:25:42 functional-618200 dockerd[7634]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:25:42 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:25:42 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:25:42 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:25:42 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 16.
	Apr 08 23:25:42 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:25:42 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:25:42 functional-618200 dockerd[7865]: time="2025-04-08T23:25:42.733372335Z" level=info msg="Starting up"
	Apr 08 23:26:42 functional-618200 dockerd[7865]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:26:42 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:26:42 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:26:42 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:26:42 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
	Apr 08 23:26:42 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:26:42 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:26:42 functional-618200 dockerd[8184]: time="2025-04-08T23:26:42.990759238Z" level=info msg="Starting up"
	Apr 08 23:27:43 functional-618200 dockerd[8184]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:27:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:27:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:27:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:27:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
	Apr 08 23:27:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:27:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:27:43 functional-618200 dockerd[8413]: time="2025-04-08T23:27:43.200403383Z" level=info msg="Starting up"
	Apr 08 23:28:43 functional-618200 dockerd[8413]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:28:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:28:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:28:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:28:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 19.
	Apr 08 23:28:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:28:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:28:43 functional-618200 dockerd[8626]: time="2025-04-08T23:28:43.448813456Z" level=info msg="Starting up"
	Apr 08 23:29:43 functional-618200 dockerd[8626]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:29:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:29:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:29:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:29:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Apr 08 23:29:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:29:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:29:43 functional-618200 dockerd[8971]: time="2025-04-08T23:29:43.729262267Z" level=info msg="Starting up"
	Apr 08 23:30:43 functional-618200 dockerd[8971]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:30:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:30:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:30:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:30:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Apr 08 23:30:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:30:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:30:43 functional-618200 dockerd[9191]: time="2025-04-08T23:30:43.933489137Z" level=info msg="Starting up"
	Apr 08 23:31:43 functional-618200 dockerd[9191]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:31:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:31:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:31:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:31:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
	Apr 08 23:31:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:31:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:31:44 functional-618200 dockerd[9408]: time="2025-04-08T23:31:44.168816618Z" level=info msg="Starting up"
	Apr 08 23:32:44 functional-618200 dockerd[9408]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:32:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:32:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:32:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:32:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
	Apr 08 23:32:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:32:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:32:44 functional-618200 dockerd[9759]: time="2025-04-08T23:32:44.477366695Z" level=info msg="Starting up"
	Apr 08 23:33:44 functional-618200 dockerd[9759]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:33:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:33:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:33:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:33:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
	Apr 08 23:33:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:33:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:33:44 functional-618200 dockerd[9976]: time="2025-04-08T23:33:44.668897222Z" level=info msg="Starting up"
	Apr 08 23:34:44 functional-618200 dockerd[9976]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:34:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:34:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:34:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:34:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 25.
	Apr 08 23:34:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:34:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:34:44 functional-618200 dockerd[10189]: time="2025-04-08T23:34:44.897317954Z" level=info msg="Starting up"
	Apr 08 23:35:44 functional-618200 dockerd[10189]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:35:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:35:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:35:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:35:45 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 26.
	Apr 08 23:35:45 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:35:45 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:35:45 functional-618200 dockerd[10580]: time="2025-04-08T23:35:45.235219924Z" level=info msg="Starting up"
	Apr 08 23:36:13 functional-618200 dockerd[10580]: time="2025-04-08T23:36:13.466116044Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:36:45 functional-618200 dockerd[10580]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:36:45 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:36:45 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:36:45 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:36:45 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:36:45 functional-618200 dockerd[11011]: time="2025-04-08T23:36:45.327202140Z" level=info msg="Starting up"
	Apr 08 23:37:45 functional-618200 dockerd[11011]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:37:45 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:37:45 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:37:45 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0408 23:37:45.413293    4680 out.go:270] * 
	W0408 23:37:45.414464    4680 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 23:37:45.421072    4680 out.go:201] 
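	Note: every restart attempt above fails identically (dockerd times out dialing /run/containerd/containerd.sock), which points at containerd rather than dockerd as the component that never came up. A minimal triage sketch from inside the VM (hypothetical commands, assuming SSH access and standard systemd tooling; not part of the captured output):
	
	    $ minikube ssh -p functional-618200
	    $ sudo systemctl status containerd                 # is containerd active, or crash-looping too?
	    $ sudo journalctl -u containerd --no-pager -n 50   # recent containerd errors, if any
	    $ ls -l /run/containerd/containerd.sock            # does the socket dockerd dials even exist?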
	
	
	==> Docker <==
	Apr 08 23:41:46 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:41:46Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c'"
	Apr 08 23:41:46 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Apr 08 23:41:46 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:41:46 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:41:46 functional-618200 dockerd[12310]: time="2025-04-08T23:41:46.486271309Z" level=info msg="Starting up"
	Apr 08 23:42:46 functional-618200 dockerd[12310]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:42:46 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:42:46Z" level=error msg="error getting RW layer size for container ID 'd1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:42:46 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:42:46Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee'"
	Apr 08 23:42:46 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:42:46Z" level=error msg="error getting RW layer size for container ID '48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:42:46 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:42:46Z" level=error msg="Set backoffDuration to : 1m0s for container ID '48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245'"
	Apr 08 23:42:46 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:42:46Z" level=error msg="error getting RW layer size for container ID 'e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:42:46 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:42:46Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c'"
	Apr 08 23:42:46 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:42:46Z" level=error msg="error getting RW layer size for container ID 'd4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:42:46 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:42:46 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:42:46Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'd4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61'"
	Apr 08 23:42:46 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:42:46Z" level=error msg="error getting RW layer size for container ID 'a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:42:46 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:42:46Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa'"
	Apr 08 23:42:46 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:42:46Z" level=error msg="error getting RW layer size for container ID 'bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:42:46 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:42:46Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f'"
	Apr 08 23:42:46 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:42:46Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:42:46 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:42:46Z" level=error msg="error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Apr 08 23:42:46 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:42:46 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:42:46Z" level=error msg="error getting RW layer size for container ID 'b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc': error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:42:46 functional-618200 cri-dockerd[1356]: time="2025-04-08T23:42:46Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc'"
	Apr 08 23:42:46 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2025-04-08T23:42:48Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
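	Note: both probes above fail for the same root cause; cri-dockerd only proxies to the Docker socket, and the daemon never finished starting. A manual check of the two endpoints (sketch; assumes curl is available in the guest image) would be:
	
	    $ sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a
	    $ curl --unix-socket /var/run/docker.sock http://localhost/_ping   # a healthy daemon answers "OK"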
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
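	Note: the refused connection on localhost:8441 is a downstream symptom; kube-apiserver runs as a container, so with the runtime down nothing listens on that port. A quick confirmation from inside the VM (sketch; assumes ss from iproute2 is present in the Buildroot guest):
	
	    $ sudo ss -ltn | grep 8441 || echo "apiserver not listening"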
	
	
	==> dmesg <==
	[  +0.229826] systemd-fstab-generator[1083]: Ignoring "noauto" option for root device
	[  +2.846583] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.173620] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.175758] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +0.246052] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[  +8.663048] systemd-fstab-generator[1449]: Ignoring "noauto" option for root device
	[  +0.103326] kauditd_printk_skb: 206 callbacks suppressed
	[  +5.045655] kauditd_printk_skb: 24 callbacks suppressed
	[  +0.759487] systemd-fstab-generator[1705]: Ignoring "noauto" option for root device
	[  +6.800944] systemd-fstab-generator[1860]: Ignoring "noauto" option for root device
	[  +0.086630] kauditd_printk_skb: 40 callbacks suppressed
	[  +8.016757] systemd-fstab-generator[2285]: Ignoring "noauto" option for root device
	[  +0.140038] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.396453] systemd-fstab-generator[2387]: Ignoring "noauto" option for root device
	[  +0.210902] kauditd_printk_skb: 12 callbacks suppressed
	[Apr 8 23:08] kauditd_printk_skb: 71 callbacks suppressed
	[Apr 8 23:09] systemd-fstab-generator[3506]: Ignoring "noauto" option for root device
	[  +0.614168] systemd-fstab-generator[3549]: Ignoring "noauto" option for root device
	[  +0.260567] systemd-fstab-generator[3561]: Ignoring "noauto" option for root device
	[  +0.277633] systemd-fstab-generator[3575]: Ignoring "noauto" option for root device
	[  +5.335755] kauditd_printk_skb: 89 callbacks suppressed
	[Apr 8 23:36] systemd-fstab-generator[10836]: Ignoring "noauto" option for root device
	[  +0.553131] systemd-fstab-generator[10872]: Ignoring "noauto" option for root device
	[  +0.187836] systemd-fstab-generator[10884]: Ignoring "noauto" option for root device
	[  +0.251836] systemd-fstab-generator[10898]: Ignoring "noauto" option for root device
	
	
	==> kernel <==
	 23:43:46 up 38 min,  0 users,  load average: 0.08, 0.03, 0.00
	Linux functional-618200 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 08 23:43:40 functional-618200 kubelet[2292]: E0408 23:43:40.982453    2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-618200?timeout=10s\": dial tcp 192.168.113.37:8441: connect: connection refused" interval="7s"
	Apr 08 23:43:43 functional-618200 kubelet[2292]: I0408 23:43:43.984435    2292 status_manager.go:890] "Failed to get status for pod" podUID="2d86200df590720b9ed4835cb131ef10" pod="kube-system/kube-scheduler-functional-618200" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-618200\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:43:43 functional-618200 kubelet[2292]: I0408 23:43:43.985436    2292 status_manager.go:890] "Failed to get status for pod" podUID="9fb511c70f1101c6e5f88375ee4557ca" pod="kube-system/etcd-functional-618200" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-618200\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:43:43 functional-618200 kubelet[2292]: I0408 23:43:43.986608    2292 status_manager.go:890] "Failed to get status for pod" podUID="195f529b1fbee47263ef9fc136a700cc" pod="kube-system/kube-apiserver-functional-618200" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-618200\": dial tcp 192.168.113.37:8441: connect: connection refused"
	Apr 08 23:43:43 functional-618200 kubelet[2292]: E0408 23:43:43.990644    2292 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-scheduler-functional-618200.18347a9c843b9810\": dial tcp 192.168.113.37:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-functional-618200.18347a9c843b9810  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-functional-618200,UID:2d86200df590720b9ed4835cb131ef10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-618200,},FirstTimestamp:2025-04-08 23:09:28.351209488 +0000 UTC m=+94.582390377,LastTimestamp:2025-04-08 23:09:34.354661848 +0000 UTC m=+100.585842837,Count:8,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-618200,}"
	Apr 08 23:43:45 functional-618200 kubelet[2292]: E0408 23:43:45.489937    2292 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 34m18.477613843s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer]"
	Apr 08 23:43:46 functional-618200 kubelet[2292]: E0408 23:43:46.761452    2292 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:43:46 functional-618200 kubelet[2292]: E0408 23:43:46.761495    2292 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:43:46 functional-618200 kubelet[2292]: E0408 23:43:46.761917    2292 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 08 23:43:46 functional-618200 kubelet[2292]: E0408 23:43:46.761997    2292 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:43:46 functional-618200 kubelet[2292]: E0408 23:43:46.762031    2292 generic.go:256] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dpodsandbox%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:43:46 functional-618200 kubelet[2292]: E0408 23:43:46.762057    2292 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 08 23:43:46 functional-618200 kubelet[2292]: E0408 23:43:46.762092    2292 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:43:46 functional-618200 kubelet[2292]: I0408 23:43:46.762105    2292 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:43:46 functional-618200 kubelet[2292]: E0408 23:43:46.762144    2292 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Apr 08 23:43:46 functional-618200 kubelet[2292]: E0408 23:43:46.762156    2292 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:43:46 functional-618200 kubelet[2292]: I0408 23:43:46.762165    2292 image_gc_manager.go:214] "Failed to monitor images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:43:46 functional-618200 kubelet[2292]: E0408 23:43:46.762274    2292 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 08 23:43:46 functional-618200 kubelet[2292]: E0408 23:43:46.762298    2292 kuberuntime_container.go:508] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:43:46 functional-618200 kubelet[2292]: E0408 23:43:46.762395    2292 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 08 23:43:46 functional-618200 kubelet[2292]: E0408 23:43:46.762419    2292 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Apr 08 23:43:46 functional-618200 kubelet[2292]: E0408 23:43:46.763140    2292 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Apr 08 23:43:46 functional-618200 kubelet[2292]: E0408 23:43:46.763194    2292 kuberuntime_container.go:508] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Apr 08 23:43:46 functional-618200 kubelet[2292]: E0408 23:43:46.763469    2292 kubelet.go:1529] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/containers/json?all=1&filters=%7B%22label%22%3A%7B%22io.kubernetes.docker.type%3Dcontainer%22%3Atrue%7D%7D\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	Apr 08 23:43:46 functional-618200 kubelet[2292]: E0408 23:43:46.900121    2292 kubelet.go:3018] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.43/version\": read unix @->/run/docker.sock: read: connection reset by peer"
	

-- /stdout --
** stderr ** 
	E0408 23:41:46.198066    4516 logs.go:279] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:41:46.232751    4516 logs.go:279] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:41:46.265579    4516 logs.go:279] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:41:46.300987    4516 logs.go:279] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:41:46.335585    4516 logs.go:279] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:41:46.367741    4516 logs.go:279] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:41:46.400607    4516 logs.go:279] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:42:46.486497    4516 logs.go:279] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-618200 -n functional-618200
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-618200 -n functional-618200: exit status 2 (11.8904882s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-618200" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (180.61s)
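Note: the failures above share one root cause visible in the kubelet log: dockerd inside the "functional-618200" VM has stopped, so every CRI call over /var/run/docker.sock is refused and the apiserver is reported as "Stopped". A minimal sketch for confirming this from the host, assuming the standard minikube ssh plumbing; these commands are illustrative and were not part of the recorded run:

	# Is dockerd alive inside the VM?
	out/minikube-windows-amd64.exe -p functional-618200 ssh -- sudo systemctl status docker --no-pager
	# If it exited, the tail of its journal usually shows why
	out/minikube-windows-amd64.exe -p functional-618200 ssh -- sudo journalctl -u docker --no-pager -n 50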

TestFunctional/serial/LogsCmd (51.35s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-618200 logs
functional_test.go:1253: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-618200 logs: exit status 1 (50.5973974s)

-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p binary-mirror-831900                                                                     | binary-mirror-831900 | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:45 UTC | 08 Apr 25 22:45 UTC |
	| addons  | disable dashboard -p                                                                        | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:45 UTC |                     |
	|         | addons-582000                                                                               |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:45 UTC |                     |
	|         | addons-582000                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-582000 --wait=true                                                                | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:45 UTC | 08 Apr 25 22:53 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |                   |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |                   |         |                     |                     |
	|         | --driver=hyperv --addons=ingress                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	| addons  | addons-582000 addons disable                                                                | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:54 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |                   |         |                     |                     |
	| addons  | addons-582000 addons disable                                                                | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:54 UTC | 08 Apr 25 22:54 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:54 UTC | 08 Apr 25 22:55 UTC |
	|         | -p addons-582000                                                                            |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-582000 addons                                                                        | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:55 UTC | 08 Apr 25 22:55 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ssh     | addons-582000 ssh cat                                                                       | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:55 UTC | 08 Apr 25 22:55 UTC |
	|         | /opt/local-path-provisioner/pvc-b0575234-bc82-4444-9a94-3c199462b7f7_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| ip      | addons-582000 ip                                                                            | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:55 UTC | 08 Apr 25 22:55 UTC |
	| addons  | addons-582000 addons disable                                                                | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:55 UTC | 08 Apr 25 22:55 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-582000 addons disable                                                                | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:55 UTC | 08 Apr 25 22:56 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-582000 addons                                                                        | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:55 UTC | 08 Apr 25 22:55 UTC |
	|         | disable cloud-spanner                                                                       |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-582000 addons disable                                                                | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:55 UTC | 08 Apr 25 22:55 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-582000 addons                                                                        | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:55 UTC | 08 Apr 25 22:55 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-582000 addons                                                                        | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:55 UTC | 08 Apr 25 22:56 UTC |
	|         | disable inspektor-gadget                                                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ssh     | addons-582000 ssh curl -s                                                                   | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:56 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |                   |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |                   |         |                     |                     |
	| addons  | addons-582000 addons disable                                                                | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:56 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |                   |         |                     |                     |
	| addons  | addons-582000 addons                                                                        | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:56 UTC |
	|         | disable volumesnapshots                                                                     |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ip      | addons-582000 ip                                                                            | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:56 UTC |
	| addons  | addons-582000 addons disable                                                                | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:56 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-582000 addons                                                                        | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:56 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-582000 addons disable                                                                | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:57 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |                   |         |                     |                     |
	| stop    | -p addons-582000                                                                            | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:57 UTC | 08 Apr 25 22:57 UTC |
	| addons  | enable dashboard -p                                                                         | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:57 UTC | 08 Apr 25 22:57 UTC |
	|         | addons-582000                                                                               |                      |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:57 UTC | 08 Apr 25 22:57 UTC |
	|         | addons-582000                                                                               |                      |                   |         |                     |                     |
	| addons  | disable gvisor -p                                                                           | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:57 UTC | 08 Apr 25 22:57 UTC |
	|         | addons-582000                                                                               |                      |                   |         |                     |                     |
	| delete  | -p addons-582000                                                                            | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:57 UTC | 08 Apr 25 22:58 UTC |
	| start   | -p nospam-268300 -n=1 --memory=2250 --wait=false                                            | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:58 UTC | 08 Apr 25 23:01 UTC |
	|         | --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                       |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| start   | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:01 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
	|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
	| start   | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:01 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
	|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
	| start   | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:02 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
	|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
	| pause   | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:02 UTC | 08 Apr 25 23:02 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
	|         | pause                                                                                       |                      |                   |         |                     |                     |
	| pause   | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:02 UTC | 08 Apr 25 23:02 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
	|         | pause                                                                                       |                      |                   |         |                     |                     |
	| pause   | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:02 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
	|         | pause                                                                                       |                      |                   |         |                     |                     |
	| unpause | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
	|         | unpause                                                                                     |                      |                   |         |                     |                     |
	| unpause | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
	|         | unpause                                                                                     |                      |                   |         |                     |                     |
	| unpause | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
	|         | unpause                                                                                     |                      |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
	|         | stop                                                                                        |                      |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
	|         | stop                                                                                        |                      |                   |         |                     |                     |
	| stop    | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
	|         | stop                                                                                        |                      |                   |         |                     |                     |
	| delete  | -p nospam-268300                                                                            | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
	| start   | -p functional-618200                                                                        | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:08 UTC |
	|         | --memory=4000                                                                               |                      |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                                       |                      |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                                                  |                      |                   |         |                     |                     |
	| start   | -p functional-618200                                                                        | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:08 UTC |                     |
	|         | --alsologtostderr -v=8                                                                      |                      |                   |         |                     |                     |
	| cache   | functional-618200 cache add                                                                 | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:15 UTC | 08 Apr 25 23:17 UTC |
	|         | registry.k8s.io/pause:3.1                                                                   |                      |                   |         |                     |                     |
	| cache   | functional-618200 cache add                                                                 | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:17 UTC | 08 Apr 25 23:19 UTC |
	|         | registry.k8s.io/pause:3.3                                                                   |                      |                   |         |                     |                     |
	| cache   | functional-618200 cache add                                                                 | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:19 UTC | 08 Apr 25 23:21 UTC |
	|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
	| cache   | functional-618200 cache add                                                                 | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:21 UTC | 08 Apr 25 23:22 UTC |
	|         | minikube-local-cache-test:functional-618200                                                 |                      |                   |         |                     |                     |
	| cache   | functional-618200 cache delete                                                              | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC | 08 Apr 25 23:22 UTC |
	|         | minikube-local-cache-test:functional-618200                                                 |                      |                   |         |                     |                     |
	| cache   | delete                                                                                      | minikube             | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC | 08 Apr 25 23:22 UTC |
	|         | registry.k8s.io/pause:3.3                                                                   |                      |                   |         |                     |                     |
	| cache   | list                                                                                        | minikube             | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC | 08 Apr 25 23:22 UTC |
	| ssh     | functional-618200 ssh sudo                                                                  | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC |                     |
	|         | crictl images                                                                               |                      |                   |         |                     |                     |
	| ssh     | functional-618200                                                                           | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC |                     |
	|         | ssh sudo docker rmi                                                                         |                      |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
	| ssh     | functional-618200 ssh                                                                       | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:23 UTC |                     |
	|         | sudo crictl inspecti                                                                        |                      |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
	| cache   | functional-618200 cache reload                                                              | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:23 UTC | 08 Apr 25 23:25 UTC |
	| ssh     | functional-618200 ssh                                                                       | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:25 UTC |                     |
	|         | sudo crictl inspecti                                                                        |                      |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
	| cache   | delete                                                                                      | minikube             | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:25 UTC | 08 Apr 25 23:25 UTC |
	|         | registry.k8s.io/pause:3.1                                                                   |                      |                   |         |                     |                     |
	| cache   | delete                                                                                      | minikube             | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:25 UTC | 08 Apr 25 23:25 UTC |
	|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
	| kubectl | functional-618200 kubectl --                                                                | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:28 UTC |                     |
	|         | --context functional-618200                                                                 |                      |                   |         |                     |                     |
	|         | get pods                                                                                    |                      |                   |         |                     |                     |
	| start   | -p functional-618200                                                                        | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:34 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision                    |                      |                   |         |                     |                     |
	|         | --wait=all                                                                                  |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 23:34:57
	Running on machine: minikube6
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 23:34:57.160655    4680 out.go:345] Setting OutFile to fd 1364 ...
	I0408 23:34:57.227306    4680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:34:57.227306    4680 out.go:358] Setting ErrFile to fd 1372...
	I0408 23:34:57.227306    4680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:34:57.246367    4680 out.go:352] Setting JSON to false
	I0408 23:34:57.249336    4680 start.go:129] hostinfo: {"hostname":"minikube6","uptime":12294,"bootTime":1744143002,"procs":175,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0408 23:34:57.250337    4680 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 23:34:57.254337    4680 out.go:177] * [functional-618200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0408 23:34:57.259337    4680 notify.go:220] Checking for updates...
	I0408 23:34:57.259337    4680 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0408 23:34:57.262420    4680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 23:34:57.265869    4680 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0408 23:34:57.268764    4680 out.go:177]   - MINIKUBE_LOCATION=20501
	I0408 23:34:57.271844    4680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 23:34:57.274915    4680 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:34:57.275745    4680 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 23:35:02.492013    4680 out.go:177] * Using the hyperv driver based on existing profile
	I0408 23:35:02.497227    4680 start.go:297] selected driver: hyperv
	I0408 23:35:02.497227    4680 start.go:901] validating driver "hyperv" against &{Name:functional-618200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-618200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.37 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:35:02.497227    4680 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 23:35:02.546322    4680 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 23:35:02.546322    4680 cni.go:84] Creating CNI manager for ""
	I0408 23:35:02.547269    4680 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 23:35:02.547269    4680 start.go:340] cluster config:
	{Name:functional-618200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-618200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.37 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:35:02.547269    4680 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 23:35:02.554319    4680 out.go:177] * Starting "functional-618200" primary control-plane node in "functional-618200" cluster
	I0408 23:35:02.556281    4680 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0408 23:35:02.557271    4680 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0408 23:35:02.557271    4680 cache.go:56] Caching tarball of preloaded images
	I0408 23:35:02.557271    4680 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0408 23:35:02.557271    4680 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0408 23:35:02.557271    4680 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-618200\config.json ...
	I0408 23:35:02.560231    4680 start.go:360] acquireMachinesLock for functional-618200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 23:35:02.560231    4680 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-618200"
	I0408 23:35:02.560231    4680 start.go:96] Skipping create...Using existing machine configuration
	I0408 23:35:02.560231    4680 fix.go:54] fixHost starting: 
	I0408 23:35:02.560231    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:05.222913    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:05.222913    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:05.222999    4680 fix.go:112] recreateIfNeeded on functional-618200: state=Running err=<nil>
	W0408 23:35:05.222999    4680 fix.go:138] unexpected machine state, will restart: <nil>
	I0408 23:35:05.225946    4680 out.go:177] * Updating the running hyperv "functional-618200" VM ...
	I0408 23:35:05.230009    4680 machine.go:93] provisionDockerMachine start ...
	I0408 23:35:05.230204    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:07.291911    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:07.292084    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:07.292225    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:09.764896    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:09.764896    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:09.772026    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:09.772916    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:09.772916    4680 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 23:35:09.909726    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-618200
	
	I0408 23:35:09.909912    4680 buildroot.go:166] provisioning hostname "functional-618200"
	I0408 23:35:09.909912    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:11.997581    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:11.997581    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:11.998187    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:14.437911    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:14.437911    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:14.443507    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:14.444263    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:14.444331    4680 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-618200 && echo "functional-618200" | sudo tee /etc/hostname
	I0408 23:35:14.603359    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-618200
	
	I0408 23:35:14.603469    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:16.670523    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:16.671534    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:16.671557    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:19.147238    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:19.147238    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:19.153778    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:19.154064    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:19.154064    4680 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-618200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-618200/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-618200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 23:35:19.293655    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 23:35:19.293818    4680 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0408 23:35:19.293818    4680 buildroot.go:174] setting up certificates
	I0408 23:35:19.293918    4680 provision.go:84] configureAuth start
	I0408 23:35:19.293918    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:21.418011    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:21.418011    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:21.418011    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:23.915067    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:23.915750    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:23.915843    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:26.054110    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:26.054110    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:26.054245    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:28.570897    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:28.570897    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:28.570979    4680 provision.go:143] copyHostCerts
	I0408 23:35:28.571441    4680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0408 23:35:28.571441    4680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0408 23:35:28.572091    4680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0408 23:35:28.573882    4680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0408 23:35:28.573882    4680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0408 23:35:28.574303    4680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0408 23:35:28.575503    4680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0408 23:35:28.575503    4680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0408 23:35:28.575803    4680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0408 23:35:28.576584    4680 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-618200 san=[127.0.0.1 192.168.113.37 functional-618200 localhost minikube]
	I0408 23:35:28.959411    4680 provision.go:177] copyRemoteCerts
	I0408 23:35:28.968415    4680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 23:35:28.968415    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:31.020048    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:31.020048    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:31.020798    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:33.462537    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:33.462537    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:33.462537    4680 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:35:33.576960    4680 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6084838s)
	I0408 23:35:33.577533    4680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0408 23:35:33.623672    4680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0408 23:35:33.670466    4680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 23:35:33.717400    4680 provision.go:87] duration metric: took 14.4232931s to configureAuth
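configureAuth then pushes ca.pem, server.pem, and server-key.pem into /etc/docker over SSH. A sketch of that transfer pattern, streaming a file body through "sudo tee" on an SSH session (golang.org/x/crypto/ssh); the host, user, and paths are taken from the log, and this is just the general shape, not minikube's ssh_runner:

package main

import (
	"bytes"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key-based auth with the machine key seen in the sshutil log lines.
	key, _ := os.ReadFile("id_rsa")
	signer, _ := ssh.ParsePrivateKey(key)
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway local VM
	}
	client, _ := ssh.Dial("tcp", "192.168.113.37:22", cfg)
	defer client.Close()

	// Stream the file through stdin; tee writes it at the destination.
	data, _ := os.ReadFile("ca.pem")
	sess, _ := client.NewSession()
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	_ = sess.Run("sudo tee /etc/docker/ca.pem >/dev/null")
}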
	I0408 23:35:33.717400    4680 buildroot.go:189] setting minikube options for container-runtime
	I0408 23:35:33.717979    4680 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:35:33.718051    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:35.820801    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:35.821878    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:35.822118    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:38.293353    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:38.293353    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:38.299330    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:38.300018    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:38.300018    4680 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0408 23:35:38.425797    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0408 23:35:38.425797    4680 buildroot.go:70] root file system type: tmpfs
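The "df --output=fstype / | tail -n 1" probe above classifies the guest's root filesystem (tmpfs here, i.e. a Buildroot-style live image). Run locally, the same probe is a one-liner; a small sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe the provisioner runs over SSH: report the root fs type.
	out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("root file system type:", strings.TrimSpace(string(out))) // e.g. "tmpfs"
}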
	I0408 23:35:38.426995    4680 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0408 23:35:38.427061    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:40.452569    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:40.452569    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:40.452796    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:42.927371    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:42.927371    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:42.934515    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:42.935261    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:42.935261    4680 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0408 23:35:43.086612    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0408 23:35:43.086740    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:45.178050    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:45.178179    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:45.178179    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:47.646488    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:47.647562    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:47.653138    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:47.653919    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:47.653919    4680 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0408 23:35:47.796320    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 23:35:47.796320    4680 machine.go:96] duration metric: took 42.5657539s to provisionDockerMachine
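The diff command above is what makes this step idempotent: the candidate unit is written to docker.service.new, and only if it differs from the live unit is it moved into place followed by daemon-reload, enable, and restart. A local Go sketch of the same swap-if-changed shape (paths and unit names as in the log; error handling elided):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	candidate, _ := os.ReadFile("docker.service.new")
	live, _ := os.ReadFile("docker.service")
	if bytes.Equal(candidate, live) {
		return // nothing changed; skip the disruptive restart
	}
	_ = os.Rename("docker.service.new", "docker.service")
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		_ = exec.Command("sudo", args...).Run()
	}
}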
	I0408 23:35:47.796320    4680 start.go:293] postStartSetup for "functional-618200" (driver="hyperv")
	I0408 23:35:47.796508    4680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 23:35:47.808373    4680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 23:35:47.808373    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:49.907410    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:49.907410    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:49.907410    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:52.435264    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:52.435264    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:52.436078    4680 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:35:52.536680    4680 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7282442s)
	I0408 23:35:52.550709    4680 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 23:35:52.557305    4680 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 23:35:52.557354    4680 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0408 23:35:52.558201    4680 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0408 23:35:52.560040    4680 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
	I0408 23:35:52.561052    4680 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts -> hosts in /etc/test/nested/copy/9864
	I0408 23:35:52.572449    4680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9864
	I0408 23:35:52.591479    4680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
	I0408 23:35:52.632158    4680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts --> /etc/test/nested/copy/9864/hosts (40 bytes)
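The filesync scan above mirrors every file under the local .minikube\files tree to the same absolute path inside the guest (98642.pem lands in /etc/ssl/certs, the test hosts file in /etc/test/nested/copy/9864). A sketch of just that path mapping, assuming only the layout visible in the log:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func main() {
	root := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\files`
	// Each regular file maps to its path relative to the "files" root,
	// e.g. files\etc\ssl\certs\98642.pem -> /etc/ssl/certs/98642.pem;
	// the real code then scps each one into the guest.
	_ = filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, _ := filepath.Rel(root, p)
		fmt.Println(p, "->", "/"+filepath.ToSlash(rel))
		return nil
	})
}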
	I0408 23:35:52.674167    4680 start.go:296] duration metric: took 4.8777819s for postStartSetup
	I0408 23:35:52.674305    4680 fix.go:56] duration metric: took 50.113417s for fixHost
	I0408 23:35:52.674384    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:54.767684    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:54.767684    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:54.767684    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:35:57.261834    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:35:57.261834    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:57.271187    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:35:57.271187    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:35:57.271187    4680 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 23:35:57.398373    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744155357.426064067
	
	I0408 23:35:57.398373    4680 fix.go:216] guest clock: 1744155357.426064067
	I0408 23:35:57.398373    4680 fix.go:229] Guest: 2025-04-08 23:35:57.426064067 +0000 UTC Remote: 2025-04-08 23:35:52.6743059 +0000 UTC m=+55.594526801 (delta=4.751758167s)
	I0408 23:35:57.398607    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:35:59.476535    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:35:59.476535    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:35:59.477439    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:36:01.946547    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:36:01.946755    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:01.952277    4680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:36:01.952431    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
	I0408 23:36:01.952431    4680 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744155357
	I0408 23:36:02.109581    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr  8 23:35:57 UTC 2025
	
	I0408 23:36:02.109581    4680 fix.go:236] clock set: Tue Apr  8 23:35:57 UTC 2025
	 (err=<nil>)
	I0408 23:36:02.109581    4680 start.go:83] releasing machines lock for "functional-618200", held for 59.5485681s
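fix.go above reads the guest clock with "date +%s.%N", compares it against the host's, and, when the skew matters, rewrites the guest clock over SSH with "sudo date -s @<seconds>" (the logged delta here was about 4.75s). A sketch of the delta computation, using the guest timestamp from the log; the 2-second tolerance is an assumption, not minikube's actual threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` inside the guest, as in the log.
	guestRaw := "1744155357.426064067"
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	host := time.Now()
	delta := guest.Sub(host)
	fmt.Printf("guest/host clock delta: %s\n", delta)
	if delta > 2*time.Second || delta < -2*time.Second {
		// The log's fix: set the guest clock to whole epoch seconds via SSH.
		fmt.Printf("would run: sudo date -s @%d\n", host.Unix())
	}
}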
	I0408 23:36:02.110548    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:36:04.180009    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:36:04.180193    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:04.180261    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:36:06.679668    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:36:06.679668    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:06.684777    4680 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0408 23:36:06.684909    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:36:06.693996    4680 ssh_runner.go:195] Run: cat /version.json
	I0408 23:36:06.693996    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
	I0408 23:36:08.902982    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:36:08.903217    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:08.903217    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:36:08.911965    4680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:36:08.911965    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:08.911965    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
	I0408 23:36:11.559377    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:36:11.559377    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:11.560763    4680 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:36:11.579839    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37
	
	I0408 23:36:11.579839    4680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:36:11.580662    4680 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
	I0408 23:36:11.653575    4680 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9686214s)
	W0408 23:36:11.653575    4680 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
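Exit status 127 with "curl.exe: command not found" means the Windows binary name was passed into the Linux guest's shell, so this registry connectivity probe can never succeed from inside the VM; it is what later surfaces as the "Failing to connect to https://registry.k8s.io/" warning. A hypothetical helper sketching the fix, picking the binary name by where the command actually runs (curlBinary is invented for illustration, not a minikube function):

package main

import (
	"fmt"
	"runtime"
)

// curlBinary picks the curl executable name for the target OS. The log shows
// the Windows name "curl.exe" being executed inside the Linux guest, which
// bash rejects with exit status 127.
func curlBinary(targetGOOS string) string {
	if targetGOOS == "windows" {
		return "curl.exe"
	}
	return "curl"
}

func main() {
	fmt.Println(curlBinary(runtime.GOOS)) // on the Windows host
	fmt.Println(curlBinary("linux"))      // inside the minikube VM
}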
	I0408 23:36:11.671985    4680 ssh_runner.go:235] Completed: cat /version.json: (4.9779236s)
	I0408 23:36:11.686366    4680 ssh_runner.go:195] Run: systemctl --version
	I0408 23:36:11.708570    4680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 23:36:11.717906    4680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 23:36:11.728234    4680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 23:36:11.747584    4680 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0408 23:36:11.747584    4680 start.go:495] detecting cgroup driver to use...
	I0408 23:36:11.747584    4680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0408 23:36:11.768904    4680 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0408 23:36:11.768904    4680 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0408 23:36:11.797321    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0408 23:36:11.831085    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 23:36:11.849662    4680 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 23:36:11.861888    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 23:36:11.903580    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:36:11.943433    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 23:36:11.977323    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:36:12.012379    4680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 23:36:12.046321    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 23:36:12.079535    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 23:36:12.110716    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
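The run of sed edits above rewrites /etc/containerd/config.toml in place: cgroupfs instead of the systemd cgroup driver, the runc.v2 shim, the pause:3.10 sandbox image, /etc/cni/net.d as the CNI conf dir, and unprivileged ports enabled. One of them reproduced as a Go regexp, so the indent-preserving capture group is visible:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Mirror of one sed from the log: force SystemdCgroup = false while
	// keeping the line's original indentation via the captured group.
	conf := `    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}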
	I0408 23:36:12.147517    4680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 23:36:12.178928    4680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 23:36:12.208351    4680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:36:12.410730    4680 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 23:36:12.439631    4680 start.go:495] detecting cgroup driver to use...
	I0408 23:36:12.451933    4680 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0408 23:36:12.488014    4680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:36:12.521384    4680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 23:36:12.558160    4680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:36:12.599092    4680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 23:36:12.621759    4680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:36:12.666043    4680 ssh_runner.go:195] Run: which cri-dockerd
	I0408 23:36:12.683104    4680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0408 23:36:12.700086    4680 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0408 23:36:12.745200    4680 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0408 23:36:12.942898    4680 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0408 23:36:13.136518    4680 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0408 23:36:13.136518    4680 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
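The 130-byte /etc/docker/daemon.json scp'd from memory is not shown in the log; given the "configuring docker to use cgroupfs" line just above, it presumably carries at least the cgroup-driver exec-opt. A guess at generating such a file (the exact minikube contents are not visible here):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed minimal shape: only the field this step demonstrably needs.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
}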
	I0408 23:36:13.182679    4680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:36:13.412451    4680 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 23:37:45.325640    4680 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m31.911983s)
	I0408 23:37:45.337425    4680 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0408 23:37:45.409039    4680 out.go:201] 
	W0408 23:37:45.412124    4680 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 08 23:06:49 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.094333857Z" level=info msg="Starting up"
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.095749501Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.097506580Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.128963677Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152469766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152558876Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152717392Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152739794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152812201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152901110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153079328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153169038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153187739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153197940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153293950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153812303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156561482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156716198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156848512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156952822Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157044531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157169744Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190389421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190521734Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190544737Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190560338Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190576740Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190838067Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191154799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191361820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191472031Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191493633Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191512135Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191527737Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191541238Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191555639Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191571341Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191603144Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191615846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191628447Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191749659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191774162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191800364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191815666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191830867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191844669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191857670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191870171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191882273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191897274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191908775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191920677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191932778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191947379Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191967081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191979383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191992484Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192114796Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192196605Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192262611Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192291214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192304416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192318917Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192331918Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193151202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193285015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193371424Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193820570Z" level=info msg="containerd successfully booted in 0.066941s"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.170474987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.203429127Z" level=info msg="Loading containers: start."
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.350665658Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.583414712Z" level=info msg="Loading containers: done."
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608611503Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608776419Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609056647Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609260067Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.713909013Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.714066029Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:06:50 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.811241096Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813084503Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813257403Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813288003Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813374004Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:07:20 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:07:21 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:07:21 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:07:21 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.861204748Z" level=info msg="Starting up"
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.863521556Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.864856161Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1097
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.891008554Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913514335Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913559535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913591835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913605435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913626835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913637435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913748735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913963436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913985636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913996836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914019636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914159537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.916995847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917210048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917295148Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917328148Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917346448Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917634649Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917741950Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917760750Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917900050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917914850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917957150Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918196151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918327452Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918413452Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918430852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918442352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918453152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918462452Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918473352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918484552Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918499152Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918509952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918520052Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918543853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918558553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918568953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918579553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918589553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918609253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918626253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918638253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918657853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918673253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918682953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918692253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918702953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918715553Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918733953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918744753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918754653Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918959554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919161355Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919325455Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919361655Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919372055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919407356Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919416356Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919735157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919968758Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920117658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920171758Z" level=info msg="containerd successfully booted in 0.029982s"
	Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.908709690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.934950284Z" level=info msg="Loading containers: start."
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.062615440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.175164242Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.282062124Z" level=info msg="Loading containers: done."
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305666909Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305777709Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.341856738Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:07:23 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.343491744Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:07:32 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.905143108Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906371813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906906114Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.907033815Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906918515Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:07:33 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:07:33 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:07:33 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.955484761Z" level=info msg="Starting up"
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.957042767Z" level=info msg="containerd not running, starting managed containerd"
	Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.958462672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1462
	Apr 08 23:07:33 functional-618200 dockerd[1462]: time="2025-04-08T23:07:33.983507761Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009132353Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009242353Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009307753Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009324953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009354454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009383954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009545254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009658655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009680555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009691855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009717555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.010024356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012580665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012671765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012945166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013039867Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013070567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013104967Z" level=info msg="metadata content store policy set" policy=shared
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013460968Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013562869Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013583269Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013598369Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013611569Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013659269Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014010570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014156471Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014247371Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014266571Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014280071Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014397172Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014425272Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014441672Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014458272Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014472772Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014498972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014515572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014537972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014555672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014570972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014585972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014601072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014615672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014629372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014643572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014658573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014679173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014709673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014738473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014783273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014916873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014942274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014955574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014969174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015051774Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015092874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015107074Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015122374Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015133174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015147174Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015158874Z" level=info msg="NRI interface is disabled by configuration."
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015573476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015638476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015690176Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015715476Z" level=info msg="containerd successfully booted in 0.033079s"
	Apr 08 23:07:35 functional-618200 dockerd[1456]: time="2025-04-08T23:07:35.262471031Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.762713164Z" level=info msg="Loading containers: start."
	Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.897446846Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.015338367Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.153824862Z" level=info msg="Loading containers: done."
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182692065Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182937366Z" level=info msg="Daemon has completed initialization"
	Apr 08 23:07:38 functional-618200 systemd[1]: Started Docker Application Container Engine.
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.220981402Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.221045402Z" level=info msg="API listen on [::]:2376"
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928174323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928255628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928274329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928976471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011163114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011256119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011273420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011437330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.047888267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048098278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048281989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048657110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089143872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089470391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089714404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.090374541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331240402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331940241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332248459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332901095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587350115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587733437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587951349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.588255466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643351545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643476652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643513354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643620460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681369670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681570881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681658686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.682028307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094044455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094486867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.095561595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.097530446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394114311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394433319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394665025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.395349443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643182806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643370211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643392711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.645053352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216296816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216387017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216402117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216977424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540620784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540963288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541044989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541180590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.848480641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850292361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850566464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.851150170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
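
Each four-line runtime=io.containerd.runc.v2 group above is a runc v2 shim starting for one container, so by 23:08 the node's pods are up. A hedged sketch (not from minikube's code) that lists those containers through the same managed containerd socket and moby namespace shown in the log:

	// list_shims.go - illustrative only; the socket path and namespace are
	// the ones appearing in the serving.../namespace=moby entries above.
	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/containerd/containerd"
	)

	func main() {
		client, err := containerd.New("/var/run/docker/containerd/containerd.sock",
			containerd.WithDefaultNamespace("moby"))
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		containers, err := client.Containers(context.Background())
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range containers {
			// Prints the same 64-hex IDs that later appear in the
			// "shim disconnected" entries during shutdown.
			fmt.Println(c.ID())
		}
	}
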
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.385762643Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:09:27 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574335274Z" level=info msg="shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574507675Z" level=warning msg="cleaning up after shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574520575Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.575374478Z" level=info msg="ignoring event" container=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.602965785Z" level=info msg="ignoring event" container=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.603895489Z" level=info msg="shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604175090Z" level=warning msg="cleaning up after shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604242890Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614380530Z" level=info msg="shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614605231Z" level=warning msg="cleaning up after shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614742231Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.620402053Z" level=info msg="ignoring event" container=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.620802455Z" level=info msg="shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621015255Z" level=info msg="ignoring event" container=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621947059Z" level=info msg="ignoring event" container=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.622304660Z" level=info msg="ignoring event" container=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622827062Z" level=warning msg="cleaning up after shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.623203064Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622314560Z" level=info msg="shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624293868Z" level=warning msg="cleaning up after shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624306868Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622381461Z" level=info msg="shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631193795Z" level=warning msg="cleaning up after shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631249695Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.667400535Z" level=info msg="ignoring event" container=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.669623644Z" level=info msg="shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.672188454Z" level=warning msg="cleaning up after shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.672924657Z" level=info msg="ignoring event" container=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.673767960Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681394990Z" level=info msg="ignoring event" container=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681607190Z" level=info msg="ignoring event" container=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.681903492Z" level=info msg="shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685272405Z" level=warning msg="cleaning up after shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685407505Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.671723952Z" level=info msg="shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693693137Z" level=warning msg="cleaning up after shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693789338Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697563052Z" level=info msg="shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697641053Z" level=warning msg="cleaning up after shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697654453Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.725345060Z" level=info msg="ignoring event" container=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725697262Z" level=info msg="shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725980963Z" level=warning msg="cleaning up after shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.726206964Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.734018694Z" level=info msg="ignoring event" container=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.736798905Z" level=info msg="shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737017505Z" level=warning msg="cleaning up after shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
	Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737255906Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1456]: time="2025-04-08T23:09:32.552363388Z" level=info msg="ignoring event" container=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556138103Z" level=info msg="shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556756905Z" level=warning msg="cleaning up after shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
	Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556921006Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.565876302Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.643029581Z" level=info msg="ignoring event" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.646699056Z" level=info msg="shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647140153Z" level=warning msg="cleaning up after shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647214253Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724363532Z" level=info msg="Daemon shutdown complete"
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724563130Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724658330Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724794029Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Deactivated successfully.
	Apr 08 23:09:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Consumed 4.925s CPU time.
	Apr 08 23:09:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:09:38 functional-618200 dockerd[3978]: time="2025-04-08T23:09:38.782261701Z" level=info msg="Starting up"
	Apr 08 23:10:38 functional-618200 dockerd[3978]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:10:38 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Apr 08 23:10:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
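
This is the failure the test trips over: dockerd restarts but never gets a response from containerd on /run/containerd/containerd.sock, and the blocking dial gives up after 60 seconds (23:09:38 to 23:10:38). The error is reproducible with a plain gRPC dial against a socket nobody is serving; a minimal sketch, assuming a 60s deadline like the one the timestamps imply:

	// dial_probe.go - hedged sketch reproducing the "context deadline
	// exceeded" error seen above against an unserved unix socket.
	package main

	import (
		"context"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
		defer cancel()

		// WithBlock retries until the socket answers or the context expires,
		// so a containerd that never comes up ends in DeadlineExceeded.
		conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()),
			grpc.WithBlock())
		if err != nil {
			log.Fatalf(`failed to dial "/run/containerd/containerd.sock": %v`, err)
		}
		defer conn.Close()
		log.Println("containerd answered")
	}
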
	Apr 08 23:10:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:10:38 functional-618200 dockerd[4187]: time="2025-04-08T23:10:38.990065142Z" level=info msg="Starting up"
	Apr 08 23:11:39 functional-618200 dockerd[4187]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:11:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:11:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:11:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:11:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Apr 08 23:11:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:11:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:11:39 functional-618200 dockerd[4495]: time="2025-04-08T23:11:39.240374985Z" level=info msg="Starting up"
	Apr 08 23:12:39 functional-618200 dockerd[4495]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:12:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:12:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:12:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:12:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Apr 08 23:12:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:12:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:12:39 functional-618200 dockerd[4717]: time="2025-04-08T23:12:39.435825366Z" level=info msg="Starting up"
	Apr 08 23:13:39 functional-618200 dockerd[4717]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:13:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:13:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:13:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:13:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Apr 08 23:13:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:13:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:13:39 functional-618200 dockerd[4937]: time="2025-04-08T23:13:39.647599381Z" level=info msg="Starting up"
	Apr 08 23:14:39 functional-618200 dockerd[4937]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:14:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:14:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:14:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:14:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Apr 08 23:14:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:14:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:14:39 functional-618200 dockerd[5287]: time="2025-04-08T23:14:39.994059486Z" level=info msg="Starting up"
	Apr 08 23:15:40 functional-618200 dockerd[5287]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:15:40 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:15:40 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:15:40 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:15:40 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
	Apr 08 23:15:40 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:15:40 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:15:40 functional-618200 dockerd[5511]: time="2025-04-08T23:15:40.241827213Z" level=info msg="Starting up"
	Apr 08 23:16:40 functional-618200 dockerd[5511]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:16:40 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:16:40 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:16:40 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:16:40 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
	Apr 08 23:16:40 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:16:40 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:16:40 functional-618200 dockerd[5774]: time="2025-04-08T23:16:40.479744325Z" level=info msg="Starting up"
	Apr 08 23:17:40 functional-618200 dockerd[5774]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:17:40 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:17:40 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:17:40 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:17:40 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 8.
	Apr 08 23:17:40 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:17:40 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:17:40 functional-618200 dockerd[6010]: time="2025-04-08T23:17:40.734060234Z" level=info msg="Starting up"
	Apr 08 23:18:40 functional-618200 dockerd[6010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:18:40 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:18:40 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:18:40 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:18:40 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 9.
	Apr 08 23:18:40 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:18:40 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:18:40 functional-618200 dockerd[6233]: time="2025-04-08T23:18:40.980938832Z" level=info msg="Starting up"
	Apr 08 23:19:41 functional-618200 dockerd[6233]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:19:41 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:19:41 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:19:41 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:19:41 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 10.
	Apr 08 23:19:41 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:19:41 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:19:41 functional-618200 dockerd[6451]: time="2025-04-08T23:19:41.243144928Z" level=info msg="Starting up"
	Apr 08 23:20:41 functional-618200 dockerd[6451]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:20:41 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:20:41 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:20:41 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:20:41 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
	Apr 08 23:20:41 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:20:41 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:20:41 functional-618200 dockerd[6677]: time="2025-04-08T23:20:41.482548376Z" level=info msg="Starting up"
	Apr 08 23:21:41 functional-618200 dockerd[6677]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:21:41 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:21:41 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:21:41 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:21:41 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
	Apr 08 23:21:41 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:21:41 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:21:41 functional-618200 dockerd[6897]: time="2025-04-08T23:21:41.739358273Z" level=info msg="Starting up"
	Apr 08 23:22:41 functional-618200 dockerd[6897]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:22:41 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:22:41 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:22:41 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:22:41 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 13.
	Apr 08 23:22:41 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:22:41 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:22:41 functional-618200 dockerd[7137]: time="2025-04-08T23:22:41.989317104Z" level=info msg="Starting up"
	Apr 08 23:23:42 functional-618200 dockerd[7137]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:23:42 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:23:42 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:23:42 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:23:42 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
	Apr 08 23:23:42 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:23:42 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:23:42 functional-618200 dockerd[7388]: time="2025-04-08T23:23:42.246986404Z" level=info msg="Starting up"
	Apr 08 23:24:42 functional-618200 dockerd[7388]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:24:42 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:24:42 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:24:42 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:24:42 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 15.
	Apr 08 23:24:42 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:24:42 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:24:42 functional-618200 dockerd[7634]: time="2025-04-08T23:24:42.498712284Z" level=info msg="Starting up"
	Apr 08 23:25:42 functional-618200 dockerd[7634]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:25:42 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:25:42 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:25:42 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:25:42 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 16.
	Apr 08 23:25:42 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:25:42 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:25:42 functional-618200 dockerd[7865]: time="2025-04-08T23:25:42.733372335Z" level=info msg="Starting up"
	Apr 08 23:26:42 functional-618200 dockerd[7865]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:26:42 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:26:42 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:26:42 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:26:42 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
	Apr 08 23:26:42 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:26:42 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:26:42 functional-618200 dockerd[8184]: time="2025-04-08T23:26:42.990759238Z" level=info msg="Starting up"
	Apr 08 23:27:43 functional-618200 dockerd[8184]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:27:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:27:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:27:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:27:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
	Apr 08 23:27:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:27:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:27:43 functional-618200 dockerd[8413]: time="2025-04-08T23:27:43.200403383Z" level=info msg="Starting up"
	Apr 08 23:28:43 functional-618200 dockerd[8413]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:28:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:28:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:28:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:28:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 19.
	Apr 08 23:28:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:28:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:28:43 functional-618200 dockerd[8626]: time="2025-04-08T23:28:43.448813456Z" level=info msg="Starting up"
	Apr 08 23:29:43 functional-618200 dockerd[8626]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:29:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:29:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:29:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:29:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Apr 08 23:29:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:29:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:29:43 functional-618200 dockerd[8971]: time="2025-04-08T23:29:43.729262267Z" level=info msg="Starting up"
	Apr 08 23:30:43 functional-618200 dockerd[8971]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:30:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:30:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:30:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:30:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Apr 08 23:30:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:30:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:30:43 functional-618200 dockerd[9191]: time="2025-04-08T23:30:43.933489137Z" level=info msg="Starting up"
	Apr 08 23:31:43 functional-618200 dockerd[9191]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:31:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:31:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:31:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:31:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
	Apr 08 23:31:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:31:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:31:44 functional-618200 dockerd[9408]: time="2025-04-08T23:31:44.168816618Z" level=info msg="Starting up"
	Apr 08 23:32:44 functional-618200 dockerd[9408]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:32:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:32:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:32:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:32:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
	Apr 08 23:32:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:32:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:32:44 functional-618200 dockerd[9759]: time="2025-04-08T23:32:44.477366695Z" level=info msg="Starting up"
	Apr 08 23:33:44 functional-618200 dockerd[9759]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:33:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:33:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:33:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:33:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
	Apr 08 23:33:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:33:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:33:44 functional-618200 dockerd[9976]: time="2025-04-08T23:33:44.668897222Z" level=info msg="Starting up"
	Apr 08 23:34:44 functional-618200 dockerd[9976]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:34:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:34:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:34:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:34:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 25.
	Apr 08 23:34:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:34:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:34:44 functional-618200 dockerd[10189]: time="2025-04-08T23:34:44.897317954Z" level=info msg="Starting up"
	Apr 08 23:35:44 functional-618200 dockerd[10189]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:35:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:35:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:35:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	Apr 08 23:35:45 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 26.
	Apr 08 23:35:45 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:35:45 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:35:45 functional-618200 dockerd[10580]: time="2025-04-08T23:35:45.235219924Z" level=info msg="Starting up"
	Apr 08 23:36:13 functional-618200 dockerd[10580]: time="2025-04-08T23:36:13.466116044Z" level=info msg="Processing signal 'terminated'"
	Apr 08 23:36:45 functional-618200 dockerd[10580]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:36:45 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:36:45 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:36:45 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 08 23:36:45 functional-618200 systemd[1]: Starting Docker Application Container Engine...
	Apr 08 23:36:45 functional-618200 dockerd[11011]: time="2025-04-08T23:36:45.327202140Z" level=info msg="Starting up"
	Apr 08 23:37:45 functional-618200 dockerd[11011]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 08 23:37:45 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 08 23:37:45 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 08 23:37:45 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0408 23:37:45.413293    4680 out.go:270] * 
	W0408 23:37:45.414464    4680 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0408 23:37:45.421072    4680 out.go:201] 
-- /stdout --
** stderr ** 
	E0408 23:44:46.964347    2824 logs.go:279] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:44:46.997019    2824 logs.go:279] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:44:47.028232    2824 logs.go:279] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:44:47.057224    2824 logs.go:279] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:44:47.088253    2824 logs.go:279] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:44:47.118814    2824 logs.go:279] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0408 23:44:47.151377    2824 logs.go:279] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
** /stderr **
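The stdout capture above shows dockerd exiting on every restart because its dial to /run/containerd/containerd.sock times out, and the stderr probes fail for the same underlying reason: no Docker daemon ever comes up. A minimal triage sketch from the host, assuming the functional-618200 profile and the out/ binary used throughout this run (each quoted command runs inside the VM over minikube ssh):

    # Is containerd up, and does the socket dockerd keeps dialing exist?
    ./out/minikube-windows-amd64.exe -p functional-618200 ssh "sudo systemctl status containerd --no-pager"
    ./out/minikube-windows-amd64.exe -p functional-618200 ssh "ls -l /run/containerd/containerd.sock"
    ./out/minikube-windows-amd64.exe -p functional-618200 ssh "sudo journalctl -u containerd --no-pager -n 50"
    # Reproduce one of the failing container-listing probes from the stderr block
    ./out/minikube-windows-amd64.exe -p functional-618200 ssh "docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}"
    # Collect the bundle the advice box asks for when filing an issue
    ./out/minikube-windows-amd64.exe -p functional-618200 logs --file=logs.txt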
functional_test.go:1255: out/minikube-windows-amd64.exe -p functional-618200 logs failed: exit status 1
functional_test.go:1245: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
| delete  | -p binary-mirror-831900                                                                     | binary-mirror-831900 | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:45 UTC | 08 Apr 25 22:45 UTC |
| addons  | disable dashboard -p                                                                        | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:45 UTC |                     |
|         | addons-582000                                                                               |                      |                   |         |                     |                     |
| addons  | enable dashboard -p                                                                         | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:45 UTC |                     |
|         | addons-582000                                                                               |                      |                   |         |                     |                     |
| start   | -p addons-582000 --wait=true                                                                | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:45 UTC | 08 Apr 25 22:53 UTC |
|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
|         | --addons=yakd --addons=volcano                                                              |                      |                   |         |                     |                     |
|         | --addons=amd-gpu-device-plugin                                                              |                      |                   |         |                     |                     |
|         | --driver=hyperv --addons=ingress                                                            |                      |                   |         |                     |                     |
|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
| addons  | addons-582000 addons disable                                                                | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:53 UTC | 08 Apr 25 22:54 UTC |
|         | volcano --alsologtostderr -v=1                                                              |                      |                   |         |                     |                     |
| addons  | addons-582000 addons disable                                                                | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:54 UTC | 08 Apr 25 22:54 UTC |
|         | gcp-auth --alsologtostderr                                                                  |                      |                   |         |                     |                     |
|         | -v=1                                                                                        |                      |                   |         |                     |                     |
| addons  | enable headlamp                                                                             | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:54 UTC | 08 Apr 25 22:55 UTC |
|         | -p addons-582000                                                                            |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| addons  | addons-582000 addons                                                                        | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:55 UTC | 08 Apr 25 22:55 UTC |
|         | disable nvidia-device-plugin                                                                |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| ssh     | addons-582000 ssh cat                                                                       | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:55 UTC | 08 Apr 25 22:55 UTC |
|         | /opt/local-path-provisioner/pvc-b0575234-bc82-4444-9a94-3c199462b7f7_default_test-pvc/file1 |                      |                   |         |                     |                     |
| ip      | addons-582000 ip                                                                            | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:55 UTC | 08 Apr 25 22:55 UTC |
| addons  | addons-582000 addons disable                                                                | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:55 UTC | 08 Apr 25 22:55 UTC |
|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
|         | -v=1                                                                                        |                      |                   |         |                     |                     |
| addons  | addons-582000 addons disable                                                                | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:55 UTC | 08 Apr 25 22:56 UTC |
|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| addons  | addons-582000 addons                                                                        | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:55 UTC | 08 Apr 25 22:55 UTC |
|         | disable cloud-spanner                                                                       |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| addons  | addons-582000 addons disable                                                                | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:55 UTC | 08 Apr 25 22:55 UTC |
|         | headlamp --alsologtostderr                                                                  |                      |                   |         |                     |                     |
|         | -v=1                                                                                        |                      |                   |         |                     |                     |
| addons  | addons-582000 addons                                                                        | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:55 UTC | 08 Apr 25 22:55 UTC |
|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| addons  | addons-582000 addons                                                                        | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:55 UTC | 08 Apr 25 22:56 UTC |
|         | disable inspektor-gadget                                                                    |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| ssh     | addons-582000 ssh curl -s                                                                   | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:56 UTC |
|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |                   |         |                     |                     |
|         | nginx.example.com'                                                                          |                      |                   |         |                     |                     |
| addons  | addons-582000 addons disable                                                                | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:56 UTC |
|         | yakd --alsologtostderr -v=1                                                                 |                      |                   |         |                     |                     |
| addons  | addons-582000 addons                                                                        | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:56 UTC |
|         | disable volumesnapshots                                                                     |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| ip      | addons-582000 ip                                                                            | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:56 UTC |
| addons  | addons-582000 addons disable                                                                | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:56 UTC |
|         | ingress-dns --alsologtostderr                                                               |                      |                   |         |                     |                     |
|         | -v=1                                                                                        |                      |                   |         |                     |                     |
| addons  | addons-582000 addons                                                                        | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:56 UTC |
|         | disable csi-hostpath-driver                                                                 |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| addons  | addons-582000 addons disable                                                                | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:56 UTC | 08 Apr 25 22:57 UTC |
|         | ingress --alsologtostderr -v=1                                                              |                      |                   |         |                     |                     |
| stop    | -p addons-582000                                                                            | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:57 UTC | 08 Apr 25 22:57 UTC |
| addons  | enable dashboard -p                                                                         | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:57 UTC | 08 Apr 25 22:57 UTC |
|         | addons-582000                                                                               |                      |                   |         |                     |                     |
| addons  | disable dashboard -p                                                                        | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:57 UTC | 08 Apr 25 22:57 UTC |
|         | addons-582000                                                                               |                      |                   |         |                     |                     |
| addons  | disable gvisor -p                                                                           | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:57 UTC | 08 Apr 25 22:57 UTC |
|         | addons-582000                                                                               |                      |                   |         |                     |                     |
| delete  | -p addons-582000                                                                            | addons-582000        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:57 UTC | 08 Apr 25 22:58 UTC |
| start   | -p nospam-268300 -n=1 --memory=2250 --wait=false                                            | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:58 UTC | 08 Apr 25 23:01 UTC |
|         | --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                       |                      |                   |         |                     |                     |
|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
| start   | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:01 UTC |                     |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
| start   | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:01 UTC |                     |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
| start   | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:02 UTC |                     |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
| pause   | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:02 UTC | 08 Apr 25 23:02 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
|         | pause                                                                                       |                      |                   |         |                     |                     |
| pause   | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:02 UTC | 08 Apr 25 23:02 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
|         | pause                                                                                       |                      |                   |         |                     |                     |
| pause   | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:02 UTC | 08 Apr 25 23:03 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
|         | pause                                                                                       |                      |                   |         |                     |                     |
| unpause | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
|         | unpause                                                                                     |                      |                   |         |                     |                     |
| unpause | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
|         | unpause                                                                                     |                      |                   |         |                     |                     |
| unpause | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:03 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
|         | unpause                                                                                     |                      |                   |         |                     |                     |
| stop    | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:03 UTC | 08 Apr 25 23:04 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
|         | stop                                                                                        |                      |                   |         |                     |                     |
| stop    | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
|         | stop                                                                                        |                      |                   |         |                     |                     |
| stop    | nospam-268300 --log_dir                                                                     | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300                                 |                      |                   |         |                     |                     |
|         | stop                                                                                        |                      |                   |         |                     |                     |
| delete  | -p nospam-268300                                                                            | nospam-268300        | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:04 UTC |
| start   | -p functional-618200                                                                        | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:04 UTC | 08 Apr 25 23:08 UTC |
|         | --memory=4000                                                                               |                      |                   |         |                     |                     |
|         | --apiserver-port=8441                                                                       |                      |                   |         |                     |                     |
|         | --wait=all --driver=hyperv                                                                  |                      |                   |         |                     |                     |
| start   | -p functional-618200                                                                        | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:08 UTC |                     |
|         | --alsologtostderr -v=8                                                                      |                      |                   |         |                     |                     |
| cache   | functional-618200 cache add                                                                 | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:15 UTC | 08 Apr 25 23:17 UTC |
|         | registry.k8s.io/pause:3.1                                                                   |                      |                   |         |                     |                     |
| cache   | functional-618200 cache add                                                                 | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:17 UTC | 08 Apr 25 23:19 UTC |
|         | registry.k8s.io/pause:3.3                                                                   |                      |                   |         |                     |                     |
| cache   | functional-618200 cache add                                                                 | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:19 UTC | 08 Apr 25 23:21 UTC |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
| cache   | functional-618200 cache add                                                                 | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:21 UTC | 08 Apr 25 23:22 UTC |
|         | minikube-local-cache-test:functional-618200                                                 |                      |                   |         |                     |                     |
| cache   | functional-618200 cache delete                                                              | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC | 08 Apr 25 23:22 UTC |
|         | minikube-local-cache-test:functional-618200                                                 |                      |                   |         |                     |                     |
| cache   | delete                                                                                      | minikube             | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC | 08 Apr 25 23:22 UTC |
|         | registry.k8s.io/pause:3.3                                                                   |                      |                   |         |                     |                     |
| cache   | list                                                                                        | minikube             | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC | 08 Apr 25 23:22 UTC |
| ssh     | functional-618200 ssh sudo                                                                  | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC |                     |
|         | crictl images                                                                               |                      |                   |         |                     |                     |
| ssh     | functional-618200                                                                           | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:22 UTC |                     |
|         | ssh sudo docker rmi                                                                         |                      |                   |         |                     |                     |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
| ssh     | functional-618200 ssh                                                                       | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:23 UTC |                     |
|         | sudo crictl inspecti                                                                        |                      |                   |         |                     |                     |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
| cache   | functional-618200 cache reload                                                              | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:23 UTC | 08 Apr 25 23:25 UTC |
| ssh     | functional-618200 ssh                                                                       | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:25 UTC |                     |
|         | sudo crictl inspecti                                                                        |                      |                   |         |                     |                     |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
| cache   | delete                                                                                      | minikube             | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:25 UTC | 08 Apr 25 23:25 UTC |
|         | registry.k8s.io/pause:3.1                                                                   |                      |                   |         |                     |                     |
| cache   | delete                                                                                      | minikube             | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:25 UTC | 08 Apr 25 23:25 UTC |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
| kubectl | functional-618200 kubectl --                                                                | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:28 UTC |                     |
|         | --context functional-618200                                                                 |                      |                   |         |                     |                     |
|         | get pods                                                                                    |                      |                   |         |                     |                     |
| start   | -p functional-618200                                                                        | functional-618200    | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:34 UTC |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision                    |                      |                   |         |                     |                     |
|         | --wait=all                                                                                  |                      |                   |         |                     |                     |
|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
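The cache and profile operations recorded in the audit table are replayable against the same binary; a quick sketch to confirm what survived the daemon failure (run from the same working directory as this job):

    # Profiles and cached images as minikube still sees them
    ./out/minikube-windows-amd64.exe profile list
    ./out/minikube-windows-amd64.exe cache list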

                                                
                                                

                                                
                                                
==> Last Start <==
Log file created at: 2025/04/08 23:34:57
Running on machine: minikube6
Binary: Built with gc go1.24.0 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
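Given the [IWEF]mmdd line format documented above, the warning and error entries of a saved trace can be isolated with one PowerShell filter; a sketch, assuming the trace was written to logs.txt (hypothetical file name):

    # Keep only W.../E... klog lines, e.g. the gopshost warning below
    Select-String -Path logs.txt -Pattern '^[WE]\d{4} '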
I0408 23:34:57.160655    4680 out.go:345] Setting OutFile to fd 1364 ...
I0408 23:34:57.227306    4680 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 23:34:57.227306    4680 out.go:358] Setting ErrFile to fd 1372...
I0408 23:34:57.227306    4680 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 23:34:57.246367    4680 out.go:352] Setting JSON to false
I0408 23:34:57.249336    4680 start.go:129] hostinfo: {"hostname":"minikube6","uptime":12294,"bootTime":1744143002,"procs":175,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
W0408 23:34:57.250337    4680 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0408 23:34:57.254337    4680 out.go:177] * [functional-618200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
I0408 23:34:57.259337    4680 notify.go:220] Checking for updates...
I0408 23:34:57.259337    4680 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
I0408 23:34:57.262420    4680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0408 23:34:57.265869    4680 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
I0408 23:34:57.268764    4680 out.go:177]   - MINIKUBE_LOCATION=20501
I0408 23:34:57.271844    4680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0408 23:34:57.274915    4680 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0408 23:34:57.275745    4680 driver.go:404] Setting default libvirt URI to qemu:///system
I0408 23:35:02.492013    4680 out.go:177] * Using the hyperv driver based on existing profile
I0408 23:35:02.497227    4680 start.go:297] selected driver: hyperv
I0408 23:35:02.497227    4680 start.go:901] validating driver "hyperv" against &{Name:functional-618200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-618200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.37 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0408 23:35:02.497227    4680 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0408 23:35:02.546322    4680 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0408 23:35:02.546322    4680 cni.go:84] Creating CNI manager for ""
I0408 23:35:02.547269    4680 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0408 23:35:02.547269    4680 start.go:340] cluster config:
{Name:functional-618200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-618200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.37 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0408 23:35:02.547269    4680 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0408 23:35:02.554319    4680 out.go:177] * Starting "functional-618200" primary control-plane node in "functional-618200" cluster
I0408 23:35:02.556281    4680 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0408 23:35:02.557271    4680 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
I0408 23:35:02.557271    4680 cache.go:56] Caching tarball of preloaded images
I0408 23:35:02.557271    4680 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0408 23:35:02.557271    4680 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
I0408 23:35:02.557271    4680 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-618200\config.json ...
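Both artifacts named in the two preceding steps are plain files on the host, so the state this restart resumed from can be inspected directly; a sketch using the exact paths from the log:

    # The profile config being saved, and the preload tarball found in cache
    Get-Content "C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-618200\config.json"
    Get-Item "C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4"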
I0408 23:35:02.560231    4680 start.go:360] acquireMachinesLock for functional-618200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0408 23:35:02.560231    4680 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-618200"
I0408 23:35:02.560231    4680 start.go:96] Skipping create...Using existing machine configuration
I0408 23:35:02.560231    4680 fix.go:54] fixHost starting: 
I0408 23:35:02.560231    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
I0408 23:35:05.222913    4680 main.go:141] libmachine: [stdout =====>] : Running
I0408 23:35:05.222913    4680 main.go:141] libmachine: [stderr =====>] : 
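The Hyper-V probes libmachine issues here can be replayed verbatim in an elevated PowerShell console to separate driver problems from guest problems; both expressions below are copied from the [executing ==>] lines in this trace:

    # VM power state, as queried by the hyperv driver
    ( Hyper-V\Get-VM functional-618200 ).state
    # First IP address of the first adapter, as queried a few lines further down
    (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]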
I0408 23:35:05.222999    4680 fix.go:112] recreateIfNeeded on functional-618200: state=Running err=<nil>
W0408 23:35:05.222999    4680 fix.go:138] unexpected machine state, will restart: <nil>
I0408 23:35:05.225946    4680 out.go:177] * Updating the running hyperv "functional-618200" VM ...
I0408 23:35:05.230009    4680 machine.go:93] provisionDockerMachine start ...
I0408 23:35:05.230204    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
I0408 23:35:07.291911    4680 main.go:141] libmachine: [stdout =====>] : Running

I0408 23:35:07.292084    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:07.292225    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
I0408 23:35:09.764896    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37

I0408 23:35:09.764896    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:09.772026    4680 main.go:141] libmachine: Using SSH client type: native
I0408 23:35:09.772916    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
I0408 23:35:09.772916    4680 main.go:141] libmachine: About to run SSH command:
hostname
I0408 23:35:09.909726    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-618200

I0408 23:35:09.909912    4680 buildroot.go:166] provisioning hostname "functional-618200"
I0408 23:35:09.909912    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
I0408 23:35:11.997581    4680 main.go:141] libmachine: [stdout =====>] : Running

I0408 23:35:11.997581    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:11.998187    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
I0408 23:35:14.437911    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37

I0408 23:35:14.437911    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:14.443507    4680 main.go:141] libmachine: Using SSH client type: native
I0408 23:35:14.444263    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
I0408 23:35:14.444331    4680 main.go:141] libmachine: About to run SSH command:
sudo hostname functional-618200 && echo "functional-618200" | sudo tee /etc/hostname
I0408 23:35:14.603359    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-618200

I0408 23:35:14.603469    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
I0408 23:35:16.670523    4680 main.go:141] libmachine: [stdout =====>] : Running

I0408 23:35:16.671534    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:16.671557    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
I0408 23:35:19.147238    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37

I0408 23:35:19.147238    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:19.153778    4680 main.go:141] libmachine: Using SSH client type: native
I0408 23:35:19.154064    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
I0408 23:35:19.154064    4680 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sfunctional-618200' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-618200/g' /etc/hosts;
			else 
				echo '127.0.1.1 functional-618200' | sudo tee -a /etc/hosts; 
			fi
		fi
I0408 23:35:19.293655    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0408 23:35:19.293818    4680 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
I0408 23:35:19.293818    4680 buildroot.go:174] setting up certificates
I0408 23:35:19.293918    4680 provision.go:84] configureAuth start
I0408 23:35:19.293918    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
I0408 23:35:21.418011    4680 main.go:141] libmachine: [stdout =====>] : Running

I0408 23:35:21.418011    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:21.418011    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
I0408 23:35:23.915067    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37

I0408 23:35:23.915750    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:23.915843    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
I0408 23:35:26.054110    4680 main.go:141] libmachine: [stdout =====>] : Running

I0408 23:35:26.054110    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:26.054245    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
I0408 23:35:28.570897    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37

I0408 23:35:28.570897    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:28.570979    4680 provision.go:143] copyHostCerts
I0408 23:35:28.571441    4680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
I0408 23:35:28.571441    4680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
I0408 23:35:28.572091    4680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
I0408 23:35:28.573882    4680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
I0408 23:35:28.573882    4680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
I0408 23:35:28.574303    4680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
I0408 23:35:28.575503    4680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
I0408 23:35:28.575503    4680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
I0408 23:35:28.575803    4680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
I0408 23:35:28.576584    4680 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-618200 san=[127.0.0.1 192.168.113.37 functional-618200 localhost minikube]
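For reference, the server certificate generated above covers the SANs listed in the log (127.0.0.1, 192.168.113.37, functional-618200, localhost, minikube) and the profile's 26280h CertExpiration (1095 days). minikube produces this cert in Go; a roughly equivalent, purely illustrative openssl sequence (file names assumed, run from the certs directory) would be:

    # Illustrative only -- minikube generates these certs in Go, not via openssl.
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.functional-618200" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
        -days 1095 -out server.pem \
        -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.113.37,DNS:functional-618200,DNS:localhost,DNS:minikube")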
I0408 23:35:28.959411    4680 provision.go:177] copyRemoteCerts
I0408 23:35:28.968415    4680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0408 23:35:28.968415    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
I0408 23:35:31.020048    4680 main.go:141] libmachine: [stdout =====>] : Running

I0408 23:35:31.020048    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:31.020798    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
I0408 23:35:33.462537    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37

I0408 23:35:33.462537    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:33.462537    4680 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
I0408 23:35:33.576960    4680 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6084838s)
I0408 23:35:33.577533    4680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0408 23:35:33.623672    4680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
I0408 23:35:33.670466    4680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0408 23:35:33.717400    4680 provision.go:87] duration metric: took 14.4232931s to configureAuth
I0408 23:35:33.717400    4680 buildroot.go:189] setting minikube options for container-runtime
I0408 23:35:33.717979    4680 config.go:182] Loaded profile config "functional-618200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0408 23:35:33.718051    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
I0408 23:35:35.820801    4680 main.go:141] libmachine: [stdout =====>] : Running

I0408 23:35:35.821878    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:35.822118    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
I0408 23:35:38.293353    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37

I0408 23:35:38.293353    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:38.299330    4680 main.go:141] libmachine: Using SSH client type: native
I0408 23:35:38.300018    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
I0408 23:35:38.300018    4680 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0408 23:35:38.425797    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0408 23:35:38.425797    4680 buildroot.go:70] root file system type: tmpfs
I0408 23:35:38.426995    4680 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0408 23:35:38.427061    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
I0408 23:35:40.452569    4680 main.go:141] libmachine: [stdout =====>] : Running

I0408 23:35:40.452569    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:40.452796    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
I0408 23:35:42.927371    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37

I0408 23:35:42.927371    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:42.934515    4680 main.go:141] libmachine: Using SSH client type: native
I0408 23:35:42.935261    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
I0408 23:35:42.935261    4680 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0408 23:35:43.086612    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
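The comments embedded in the unit above describe a general systemd rule: when overriding a unit whose base configuration already defines ExecStart=, the override must first clear the inherited command with an empty assignment, or systemd refuses to start the service ("more than one ExecStart= setting"). A minimal drop-in showing the same technique (hypothetical override, not part of this log):

    # /etc/systemd/system/docker.service.d/override.conf (illustrative)
    [Service]
    # Empty assignment clears the ExecStart= inherited from the base unit.
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock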
I0408 23:35:43.086740    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
I0408 23:35:45.178050    4680 main.go:141] libmachine: [stdout =====>] : Running

I0408 23:35:45.178179    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:45.178179    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
I0408 23:35:47.646488    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37

I0408 23:35:47.647562    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:47.653138    4680 main.go:141] libmachine: Using SSH client type: native
I0408 23:35:47.653919    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
I0408 23:35:47.653919    4680 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0408 23:35:47.796320    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0408 23:35:47.796320    4680 machine.go:96] duration metric: took 42.5657539s to provisionDockerMachine
I0408 23:35:47.796320    4680 start.go:293] postStartSetup for "functional-618200" (driver="hyperv")
I0408 23:35:47.796508    4680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0408 23:35:47.808373    4680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0408 23:35:47.808373    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
I0408 23:35:49.907410    4680 main.go:141] libmachine: [stdout =====>] : Running

I0408 23:35:49.907410    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:49.907410    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
I0408 23:35:52.435264    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37

I0408 23:35:52.435264    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:52.436078    4680 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
I0408 23:35:52.536680    4680 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7282442s)
I0408 23:35:52.550709    4680 ssh_runner.go:195] Run: cat /etc/os-release
I0408 23:35:52.557305    4680 info.go:137] Remote host: Buildroot 2023.02.9
I0408 23:35:52.557354    4680 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
I0408 23:35:52.558201    4680 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
I0408 23:35:52.560040    4680 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
I0408 23:35:52.561052    4680 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts -> hosts in /etc/test/nested/copy/9864
I0408 23:35:52.572449    4680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9864
I0408 23:35:52.591479    4680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
I0408 23:35:52.632158    4680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts --> /etc/test/nested/copy/9864/hosts (40 bytes)
I0408 23:35:52.674167    4680 start.go:296] duration metric: took 4.8777819s for postStartSetup
I0408 23:35:52.674305    4680 fix.go:56] duration metric: took 50.113417s for fixHost
I0408 23:35:52.674384    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
I0408 23:35:54.767684    4680 main.go:141] libmachine: [stdout =====>] : Running

I0408 23:35:54.767684    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:54.767684    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
I0408 23:35:57.261834    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37

I0408 23:35:57.261834    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:57.271187    4680 main.go:141] libmachine: Using SSH client type: native
I0408 23:35:57.271187    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
I0408 23:35:57.271187    4680 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0408 23:35:57.398373    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744155357.426064067

I0408 23:35:57.398373    4680 fix.go:216] guest clock: 1744155357.426064067
I0408 23:35:57.398373    4680 fix.go:229] Guest: 2025-04-08 23:35:57.426064067 +0000 UTC Remote: 2025-04-08 23:35:52.6743059 +0000 UTC m=+55.594526801 (delta=4.751758167s)
I0408 23:35:57.398607    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
I0408 23:35:59.476535    4680 main.go:141] libmachine: [stdout =====>] : Running

I0408 23:35:59.476535    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:35:59.477439    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
I0408 23:36:01.946547    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37

I0408 23:36:01.946755    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:36:01.952277    4680 main.go:141] libmachine: Using SSH client type: native
I0408 23:36:01.952431    4680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.37 22 <nil> <nil>}
I0408 23:36:01.952431    4680 main.go:141] libmachine: About to run SSH command:
sudo date -s @1744155357
I0408 23:36:02.109581    4680 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr  8 23:35:57 UTC 2025

I0408 23:36:02.109581    4680 fix.go:236] clock set: Tue Apr  8 23:35:57 UTC 2025
(err=<nil>)
I0408 23:36:02.109581    4680 start.go:83] releasing machines lock for "functional-618200", held for 59.5485681s
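The fixHost step above syncs the guest clock: it reads the VM time over SSH with date +%s.%N, compares it to the host wall clock (delta=4.751758167s in this run), and rewrites the guest clock with sudo date -s @<seconds>. A minimal standalone sketch of the same check, assuming an "minikube-vm" ssh alias; this is not minikube's actual implementation, which does the comparison in Go:

    # Sketch only; ${delta#-} strips a leading minus sign to take the absolute value.
    guest=$(ssh minikube-vm 'date +%s')
    host=$(date +%s)
    delta=$((guest - host))
    if [ "${delta#-}" -gt 1 ]; then
        ssh minikube-vm "sudo date -s @${host}"
    fi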
I0408 23:36:02.110548    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
I0408 23:36:04.180009    4680 main.go:141] libmachine: [stdout =====>] : Running

I0408 23:36:04.180193    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:36:04.180261    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
I0408 23:36:06.679668    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37

I0408 23:36:06.679668    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:36:06.684777    4680 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
I0408 23:36:06.684909    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
I0408 23:36:06.693996    4680 ssh_runner.go:195] Run: cat /version.json
I0408 23:36:06.693996    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-618200 ).state
I0408 23:36:08.902982    4680 main.go:141] libmachine: [stdout =====>] : Running

I0408 23:36:08.903217    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:36:08.903217    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
I0408 23:36:08.911965    4680 main.go:141] libmachine: [stdout =====>] : Running

I0408 23:36:08.911965    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:36:08.911965    4680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-618200 ).networkadapters[0]).ipaddresses[0]
I0408 23:36:11.559377    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37

I0408 23:36:11.559377    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:36:11.560763    4680 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
I0408 23:36:11.579839    4680 main.go:141] libmachine: [stdout =====>] : 192.168.113.37

I0408 23:36:11.579839    4680 main.go:141] libmachine: [stderr =====>] : 
I0408 23:36:11.580662    4680 sshutil.go:53] new ssh client: &{IP:192.168.113.37 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-618200\id_rsa Username:docker}
I0408 23:36:11.653575    4680 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9686214s)
W0408 23:36:11.653575    4680 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
stdout:

stderr:
bash: line 1: curl.exe: command not found
I0408 23:36:11.671985    4680 ssh_runner.go:235] Completed: cat /version.json: (4.9779236s)
I0408 23:36:11.686366    4680 ssh_runner.go:195] Run: systemctl --version
I0408 23:36:11.708570    4680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0408 23:36:11.717906    4680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0408 23:36:11.728234    4680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0408 23:36:11.747584    4680 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0408 23:36:11.747584    4680 start.go:495] detecting cgroup driver to use...
I0408 23:36:11.747584    4680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
W0408 23:36:11.768904    4680 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
W0408 23:36:11.768904    4680 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
I0408 23:36:11.797321    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0408 23:36:11.831085    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0408 23:36:11.849662    4680 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0408 23:36:11.861888    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0408 23:36:11.903580    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0408 23:36:11.943433    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0408 23:36:11.977323    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0408 23:36:12.012379    4680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0408 23:36:12.046321    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0408 23:36:12.079535    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0408 23:36:12.110716    4680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0408 23:36:12.147517    4680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0408 23:36:12.178928    4680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0408 23:36:12.208351    4680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0408 23:36:12.410730    4680 ssh_runner.go:195] Run: sudo systemctl restart containerd
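The sed commands above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver, the runc v2 shim, and the expected sandbox image before the service is restarted. The resulting file is not shown in the log; an assumed fragment of what the edits would produce, for orientation only:

    # Illustrative /etc/containerd/config.toml fragment (assumed layout, not from this log)
    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      restrict_oom_score_adj = false
      sandbox_image = "registry.k8s.io/pause:3.10"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false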
I0408 23:36:12.439631    4680 start.go:495] detecting cgroup driver to use...
I0408 23:36:12.451933    4680 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0408 23:36:12.488014    4680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0408 23:36:12.521384    4680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0408 23:36:12.558160    4680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0408 23:36:12.599092    4680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0408 23:36:12.621759    4680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0408 23:36:12.666043    4680 ssh_runner.go:195] Run: which cri-dockerd
I0408 23:36:12.683104    4680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0408 23:36:12.700086    4680 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0408 23:36:12.745200    4680 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0408 23:36:12.942898    4680 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0408 23:36:13.136518    4680 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0408 23:36:13.136518    4680 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0408 23:36:13.182679    4680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0408 23:36:13.412451    4680 ssh_runner.go:195] Run: sudo systemctl restart docker
I0408 23:37:45.325640    4680 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m31.911983s)
I0408 23:37:45.337425    4680 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0408 23:37:45.409039    4680 out.go:201] 
W0408 23:37:45.412124    4680 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

sudo journalctl --no-pager -u docker:
-- stdout --
Apr 08 23:06:49 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.094333857Z" level=info msg="Starting up"
Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.095749501Z" level=info msg="containerd not running, starting managed containerd"
Apr 08 23:06:49 functional-618200 dockerd[667]: time="2025-04-08T23:06:49.097506580Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=673
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.128963677Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152469766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152558876Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152717392Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152739794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152812201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.152901110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153079328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153169038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153187739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153197940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153293950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.153812303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156561482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156716198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156848512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.156952822Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157044531Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.157169744Z" level=info msg="metadata content store policy set" policy=shared
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190389421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190521734Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190544737Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190560338Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190576740Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.190838067Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191154799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191361820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191472031Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191493633Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191512135Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191527737Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191541238Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191555639Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191571341Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191603144Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191615846Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191628447Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191749659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191774162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191800364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191815666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191830867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191844669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191857670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191870171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191882273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191897274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191908775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191920677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191932778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191947379Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191967081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191979383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.191992484Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192114796Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192196605Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192262611Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192291214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192304416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192318917Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.192331918Z" level=info msg="NRI interface is disabled by configuration."
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193151202Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193285015Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193371424Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Apr 08 23:06:49 functional-618200 dockerd[673]: time="2025-04-08T23:06:49.193820570Z" level=info msg="containerd successfully booted in 0.066941s"
Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.170474987Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.203429127Z" level=info msg="Loading containers: start."
Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.350665658Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.583414712Z" level=info msg="Loading containers: done."
Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608611503Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.608776419Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609056647Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.609260067Z" level=info msg="Daemon has completed initialization"
Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.713909013Z" level=info msg="API listen on /var/run/docker.sock"
Apr 08 23:06:50 functional-618200 dockerd[667]: time="2025-04-08T23:06:50.714066029Z" level=info msg="API listen on [::]:2376"
Apr 08 23:06:50 functional-618200 systemd[1]: Started Docker Application Container Engine.
Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.811241096Z" level=info msg="Processing signal 'terminated'"
Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813084503Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813257403Z" level=info msg="Daemon shutdown complete"
Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813288003Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Apr 08 23:07:20 functional-618200 dockerd[667]: time="2025-04-08T23:07:20.813374004Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Apr 08 23:07:20 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
Apr 08 23:07:21 functional-618200 systemd[1]: docker.service: Deactivated successfully.
Apr 08 23:07:21 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:07:21 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.861204748Z" level=info msg="Starting up"
Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.863521556Z" level=info msg="containerd not running, starting managed containerd"
Apr 08 23:07:21 functional-618200 dockerd[1091]: time="2025-04-08T23:07:21.864856161Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1097
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.891008554Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913514335Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913559535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913591835Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913605435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913626835Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913637435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913748735Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913963436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913985636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.913996836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914019636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.914159537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.916995847Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917210048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917295148Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917328148Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917346448Z" level=info msg="metadata content store policy set" policy=shared
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917634649Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917741950Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917760750Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917900050Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917914850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.917957150Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918196151Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918327452Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918413452Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918430852Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918442352Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918453152Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918462452Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918473352Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918484552Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918499152Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918509952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918520052Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918543853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918558553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918568953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918579553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918589553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918609253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918626253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918638253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918657853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918673253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918682953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918692253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918702953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918715553Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918733953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918744753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918754653Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.918959554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919161355Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919325455Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919361655Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919372055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919407356Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919416356Z" level=info msg="NRI interface is disabled by configuration."
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919735157Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.919968758Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920117658Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Apr 08 23:07:21 functional-618200 dockerd[1097]: time="2025-04-08T23:07:21.920171758Z" level=info msg="containerd successfully booted in 0.029982s"
Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.908709690Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Apr 08 23:07:22 functional-618200 dockerd[1091]: time="2025-04-08T23:07:22.934950284Z" level=info msg="Loading containers: start."
Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.062615440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.175164242Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.282062124Z" level=info msg="Loading containers: done."
Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305666909Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.305777709Z" level=info msg="Daemon has completed initialization"
Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.341856738Z" level=info msg="API listen on /var/run/docker.sock"
Apr 08 23:07:23 functional-618200 systemd[1]: Started Docker Application Container Engine.
Apr 08 23:07:23 functional-618200 dockerd[1091]: time="2025-04-08T23:07:23.343491744Z" level=info msg="API listen on [::]:2376"
Apr 08 23:07:32 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.905143108Z" level=info msg="Processing signal 'terminated'"
Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906371813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906906114Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.907033815Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Apr 08 23:07:32 functional-618200 dockerd[1091]: time="2025-04-08T23:07:32.906918515Z" level=info msg="Daemon shutdown complete"
Apr 08 23:07:33 functional-618200 systemd[1]: docker.service: Deactivated successfully.
Apr 08 23:07:33 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:07:33 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.955484761Z" level=info msg="Starting up"
Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.957042767Z" level=info msg="containerd not running, starting managed containerd"
Apr 08 23:07:33 functional-618200 dockerd[1456]: time="2025-04-08T23:07:33.958462672Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1462
Apr 08 23:07:33 functional-618200 dockerd[1462]: time="2025-04-08T23:07:33.983507761Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009132353Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009242353Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009307753Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009324953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009354454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009383954Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009545254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009658655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009680555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009691855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.009717555Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.010024356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012580665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012671765Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.012945166Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013039867Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013070567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013104967Z" level=info msg="metadata content store policy set" policy=shared
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013460968Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013562869Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013583269Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013598369Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013611569Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.013659269Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014010570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014156471Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014247371Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014266571Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014280071Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014397172Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014425272Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014441672Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014458272Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014472772Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014498972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014515572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014537972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014555672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014570972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014585972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014601072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014615672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014629372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014643572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014658573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014679173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014709673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014738473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014783273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014916873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014942274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014955574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.014969174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015051774Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015092874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015107074Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015122374Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015133174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015147174Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015158874Z" level=info msg="NRI interface is disabled by configuration."
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015573476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015638476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015690176Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Apr 08 23:07:34 functional-618200 dockerd[1462]: time="2025-04-08T23:07:34.015715476Z" level=info msg="containerd successfully booted in 0.033079s"
Apr 08 23:07:35 functional-618200 dockerd[1456]: time="2025-04-08T23:07:35.262471031Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.762713164Z" level=info msg="Loading containers: start."
Apr 08 23:07:37 functional-618200 dockerd[1456]: time="2025-04-08T23:07:37.897446846Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.015338367Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.153824862Z" level=info msg="Loading containers: done."
Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182692065Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.182937366Z" level=info msg="Daemon has completed initialization"
Apr 08 23:07:38 functional-618200 systemd[1]: Started Docker Application Container Engine.
Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.220981402Z" level=info msg="API listen on /var/run/docker.sock"
Apr 08 23:07:38 functional-618200 dockerd[1456]: time="2025-04-08T23:07:38.221045402Z" level=info msg="API listen on [::]:2376"
Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928174323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928255628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928274329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:07:46 functional-618200 dockerd[1462]: time="2025-04-08T23:07:46.928976471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011163114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011256119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011273420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.011437330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.047888267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048098278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048281989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.048657110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089143872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089470391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.089714404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.090374541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331240402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.331940241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332248459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.332901095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587350115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587733437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.587951349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.588255466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643351545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643476652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643513354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.643620460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681369670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681570881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.681658686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:07:47 functional-618200 dockerd[1462]: time="2025-04-08T23:07:47.682028307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094044455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.094486867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.095561595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.097530446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394114311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394433319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.394665025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:08:00 functional-618200 dockerd[1462]: time="2025-04-08T23:08:00.395349443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643182806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643370211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.643392711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:08:01 functional-618200 dockerd[1462]: time="2025-04-08T23:08:01.645053352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216296816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216387017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216402117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:08:02 functional-618200 dockerd[1462]: time="2025-04-08T23:08:02.216977424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540620784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.540963288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541044989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.541180590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.848480641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850292361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.850566464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:08:07 functional-618200 dockerd[1462]: time="2025-04-08T23:08:07.851150170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.385762643Z" level=info msg="Processing signal 'terminated'"
Apr 08 23:09:27 functional-618200 systemd[1]: Stopping Docker Application Container Engine...
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574335274Z" level=info msg="shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574507675Z" level=warning msg="cleaning up after shim disconnected" id=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.574520575Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.575374478Z" level=info msg="ignoring event" container=61835377850b8ea72d0abdf0bad839688489e78029ee3cab43a4e3e6a68cbdae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.602965785Z" level=info msg="ignoring event" container=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.603895489Z" level=info msg="shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604175090Z" level=warning msg="cleaning up after shim disconnected" id=1c2a6b93bc3f52915332cecc9e22ed7ff4e4d28f9268b07e8a81e685350c3e80 namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.604242890Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614380530Z" level=info msg="shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614605231Z" level=warning msg="cleaning up after shim disconnected" id=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.614742231Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.620402053Z" level=info msg="ignoring event" container=b069db02a0938273a0d775acc607a21eec0f41ceaf25ce5179a8daa85b6978dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.620802455Z" level=info msg="shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621015255Z" level=info msg="ignoring event" container=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.621947059Z" level=info msg="ignoring event" container=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.622304660Z" level=info msg="ignoring event" container=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622827062Z" level=warning msg="cleaning up after shim disconnected" id=8102ad68035e206f6b6c8d47c28af6cf62136b25de1f8fb449414914050389e6 namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.623203064Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622314560Z" level=info msg="shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624293868Z" level=warning msg="cleaning up after shim disconnected" id=de461d4d0267bffdd3176c8b2bd8a334c7ff9c1d97faec6a083f1966190fbd0c namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.624306868Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.622381461Z" level=info msg="shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631193795Z" level=warning msg="cleaning up after shim disconnected" id=3f2dd924912839294c2be46fe66e19d7b421577ade3627da8f2f121ac861228c namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.631249695Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.667400535Z" level=info msg="ignoring event" container=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.669623644Z" level=info msg="shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.672188454Z" level=warning msg="cleaning up after shim disconnected" id=d4e0d79c07613ac8475984a75f8ff30dbf2d06e007bf94c0efa7bc0d6e34dd61 namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.672924657Z" level=info msg="ignoring event" container=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.673767960Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681394990Z" level=info msg="ignoring event" container=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.681607190Z" level=info msg="ignoring event" container=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.681903492Z" level=info msg="shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685272405Z" level=warning msg="cleaning up after shim disconnected" id=48326e30120aa274edc2312471becf152696a3cfefed42cdc9c75dad20960245 namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.685407505Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.671723952Z" level=info msg="shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693693137Z" level=warning msg="cleaning up after shim disconnected" id=d1dcef59f8d37144dfa16ecef448246f40c51935aa3100d9f8db17537dbd25ee namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.693789338Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697563052Z" level=info msg="shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697641053Z" level=warning msg="cleaning up after shim disconnected" id=3c17f37fdb3fa550870bc25a11d6e53cacfb0e594dbe418910a7f56e203d919c namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.697654453Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.725345060Z" level=info msg="ignoring event" container=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725697262Z" level=info msg="shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.725980963Z" level=warning msg="cleaning up after shim disconnected" id=a7d7cb2ac406c97161b1f5a59668707ed76867ed89331e87e900d7ec76a3a2aa namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.726206964Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1456]: time="2025-04-08T23:09:27.734018694Z" level=info msg="ignoring event" container=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.736798905Z" level=info msg="shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737017505Z" level=warning msg="cleaning up after shim disconnected" id=cc59361538985d8f9d1caeef619651bc6a772d67ae66b1b70783e92d08fac321 namespace=moby
Apr 08 23:09:27 functional-618200 dockerd[1462]: time="2025-04-08T23:09:27.737255906Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 08 23:09:32 functional-618200 dockerd[1456]: time="2025-04-08T23:09:32.552363388Z" level=info msg="ignoring event" container=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556138103Z" level=info msg="shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556756905Z" level=warning msg="cleaning up after shim disconnected" id=e8cc3adcf777ab37a46a0efcbcc485dd09960a6cf45125410aeb00a6f6a1099c namespace=moby
Apr 08 23:09:32 functional-618200 dockerd[1462]: time="2025-04-08T23:09:32.556921006Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.565876302Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f
Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.643029581Z" level=info msg="ignoring event" container=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.646699056Z" level=info msg="shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647140153Z" level=warning msg="cleaning up after shim disconnected" id=bdb6045d8adb0266c15cfeb9f264b1dc172ce67b0e456bebca2b0f8efd33d62f namespace=moby
Apr 08 23:09:37 functional-618200 dockerd[1462]: time="2025-04-08T23:09:37.647214253Z" level=info msg="cleaning up dead shim" namespace=moby
Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724363532Z" level=info msg="Daemon shutdown complete"
Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724563130Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724658330Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Apr 08 23:09:37 functional-618200 dockerd[1456]: time="2025-04-08T23:09:37.724794029Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Deactivated successfully.
Apr 08 23:09:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:09:38 functional-618200 systemd[1]: docker.service: Consumed 4.925s CPU time.
Apr 08 23:09:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:09:38 functional-618200 dockerd[3978]: time="2025-04-08T23:09:38.782261701Z" level=info msg="Starting up"
Apr 08 23:10:38 functional-618200 dockerd[3978]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:10:38 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:10:38 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
Apr 08 23:10:38 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:10:38 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:10:38 functional-618200 dockerd[4187]: time="2025-04-08T23:10:38.990065142Z" level=info msg="Starting up"
Apr 08 23:11:39 functional-618200 dockerd[4187]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:11:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:11:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:11:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:11:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
Apr 08 23:11:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:11:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:11:39 functional-618200 dockerd[4495]: time="2025-04-08T23:11:39.240374985Z" level=info msg="Starting up"
Apr 08 23:12:39 functional-618200 dockerd[4495]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:12:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:12:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:12:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:12:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Apr 08 23:12:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:12:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:12:39 functional-618200 dockerd[4717]: time="2025-04-08T23:12:39.435825366Z" level=info msg="Starting up"
Apr 08 23:13:39 functional-618200 dockerd[4717]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:13:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:13:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:13:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:13:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
Apr 08 23:13:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:13:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:13:39 functional-618200 dockerd[4937]: time="2025-04-08T23:13:39.647599381Z" level=info msg="Starting up"
Apr 08 23:14:39 functional-618200 dockerd[4937]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:14:39 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:14:39 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:14:39 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:14:39 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
Apr 08 23:14:39 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:14:39 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:14:39 functional-618200 dockerd[5287]: time="2025-04-08T23:14:39.994059486Z" level=info msg="Starting up"
Apr 08 23:15:40 functional-618200 dockerd[5287]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:15:40 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:15:40 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:15:40 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:15:40 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
Apr 08 23:15:40 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:15:40 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:15:40 functional-618200 dockerd[5511]: time="2025-04-08T23:15:40.241827213Z" level=info msg="Starting up"
Apr 08 23:16:40 functional-618200 dockerd[5511]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:16:40 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:16:40 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:16:40 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:16:40 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
Apr 08 23:16:40 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:16:40 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:16:40 functional-618200 dockerd[5774]: time="2025-04-08T23:16:40.479744325Z" level=info msg="Starting up"
Apr 08 23:17:40 functional-618200 dockerd[5774]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:17:40 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:17:40 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:17:40 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:17:40 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 8.
Apr 08 23:17:40 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:17:40 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:17:40 functional-618200 dockerd[6010]: time="2025-04-08T23:17:40.734060234Z" level=info msg="Starting up"
Apr 08 23:18:40 functional-618200 dockerd[6010]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:18:40 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:18:40 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:18:40 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:18:40 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 9.
Apr 08 23:18:40 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:18:40 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:18:40 functional-618200 dockerd[6233]: time="2025-04-08T23:18:40.980938832Z" level=info msg="Starting up"
Apr 08 23:19:41 functional-618200 dockerd[6233]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:19:41 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:19:41 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:19:41 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:19:41 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 10.
Apr 08 23:19:41 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:19:41 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:19:41 functional-618200 dockerd[6451]: time="2025-04-08T23:19:41.243144928Z" level=info msg="Starting up"
Apr 08 23:20:41 functional-618200 dockerd[6451]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:20:41 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:20:41 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:20:41 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:20:41 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
Apr 08 23:20:41 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:20:41 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:20:41 functional-618200 dockerd[6677]: time="2025-04-08T23:20:41.482548376Z" level=info msg="Starting up"
Apr 08 23:21:41 functional-618200 dockerd[6677]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:21:41 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:21:41 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:21:41 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:21:41 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
Apr 08 23:21:41 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:21:41 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:21:41 functional-618200 dockerd[6897]: time="2025-04-08T23:21:41.739358273Z" level=info msg="Starting up"
Apr 08 23:22:41 functional-618200 dockerd[6897]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:22:41 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:22:41 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:22:41 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:22:41 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 13.
Apr 08 23:22:41 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:22:41 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:22:41 functional-618200 dockerd[7137]: time="2025-04-08T23:22:41.989317104Z" level=info msg="Starting up"
Apr 08 23:23:42 functional-618200 dockerd[7137]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:23:42 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:23:42 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:23:42 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:23:42 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
Apr 08 23:23:42 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:23:42 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:23:42 functional-618200 dockerd[7388]: time="2025-04-08T23:23:42.246986404Z" level=info msg="Starting up"
Apr 08 23:24:42 functional-618200 dockerd[7388]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:24:42 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:24:42 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:24:42 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:24:42 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 15.
Apr 08 23:24:42 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:24:42 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:24:42 functional-618200 dockerd[7634]: time="2025-04-08T23:24:42.498712284Z" level=info msg="Starting up"
Apr 08 23:25:42 functional-618200 dockerd[7634]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:25:42 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:25:42 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:25:42 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:25:42 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 16.
Apr 08 23:25:42 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:25:42 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:25:42 functional-618200 dockerd[7865]: time="2025-04-08T23:25:42.733372335Z" level=info msg="Starting up"
Apr 08 23:26:42 functional-618200 dockerd[7865]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:26:42 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:26:42 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:26:42 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:26:42 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
Apr 08 23:26:42 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:26:42 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:26:42 functional-618200 dockerd[8184]: time="2025-04-08T23:26:42.990759238Z" level=info msg="Starting up"
Apr 08 23:27:43 functional-618200 dockerd[8184]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:27:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:27:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:27:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:27:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
Apr 08 23:27:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:27:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:27:43 functional-618200 dockerd[8413]: time="2025-04-08T23:27:43.200403383Z" level=info msg="Starting up"
Apr 08 23:28:43 functional-618200 dockerd[8413]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:28:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:28:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:28:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:28:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 19.
Apr 08 23:28:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:28:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:28:43 functional-618200 dockerd[8626]: time="2025-04-08T23:28:43.448813456Z" level=info msg="Starting up"
Apr 08 23:29:43 functional-618200 dockerd[8626]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:29:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:29:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:29:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:29:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
Apr 08 23:29:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:29:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:29:43 functional-618200 dockerd[8971]: time="2025-04-08T23:29:43.729262267Z" level=info msg="Starting up"
Apr 08 23:30:43 functional-618200 dockerd[8971]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:30:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:30:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:30:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:30:43 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
Apr 08 23:30:43 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:30:43 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:30:43 functional-618200 dockerd[9191]: time="2025-04-08T23:30:43.933489137Z" level=info msg="Starting up"
Apr 08 23:31:43 functional-618200 dockerd[9191]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:31:43 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:31:43 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:31:43 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:31:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
Apr 08 23:31:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:31:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:31:44 functional-618200 dockerd[9408]: time="2025-04-08T23:31:44.168816618Z" level=info msg="Starting up"
Apr 08 23:32:44 functional-618200 dockerd[9408]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:32:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:32:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:32:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:32:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
Apr 08 23:32:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:32:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:32:44 functional-618200 dockerd[9759]: time="2025-04-08T23:32:44.477366695Z" level=info msg="Starting up"
Apr 08 23:33:44 functional-618200 dockerd[9759]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:33:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:33:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:33:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:33:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
Apr 08 23:33:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:33:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:33:44 functional-618200 dockerd[9976]: time="2025-04-08T23:33:44.668897222Z" level=info msg="Starting up"
Apr 08 23:34:44 functional-618200 dockerd[9976]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:34:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:34:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:34:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:34:44 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 25.
Apr 08 23:34:44 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:34:44 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:34:44 functional-618200 dockerd[10189]: time="2025-04-08T23:34:44.897317954Z" level=info msg="Starting up"
Apr 08 23:35:44 functional-618200 dockerd[10189]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:35:44 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:35:44 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:35:44 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.
Apr 08 23:35:45 functional-618200 systemd[1]: docker.service: Scheduled restart job, restart counter is at 26.
Apr 08 23:35:45 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:35:45 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:35:45 functional-618200 dockerd[10580]: time="2025-04-08T23:35:45.235219924Z" level=info msg="Starting up"
Apr 08 23:36:13 functional-618200 dockerd[10580]: time="2025-04-08T23:36:13.466116044Z" level=info msg="Processing signal 'terminated'"
Apr 08 23:36:45 functional-618200 dockerd[10580]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:36:45 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:36:45 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:36:45 functional-618200 systemd[1]: Stopped Docker Application Container Engine.
Apr 08 23:36:45 functional-618200 systemd[1]: Starting Docker Application Container Engine...
Apr 08 23:36:45 functional-618200 dockerd[11011]: time="2025-04-08T23:36:45.327202140Z" level=info msg="Starting up"
Apr 08 23:37:45 functional-618200 dockerd[11011]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Apr 08 23:37:45 functional-618200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 08 23:37:45 functional-618200 systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 08 23:37:45 functional-618200 systemd[1]: Failed to start Docker Application Container Engine.

-- /stdout --
W0408 23:37:45.413293    4680 out.go:270] * 
W0408 23:37:45.414464    4680 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0408 23:37:45.421072    4680 out.go:201] 

***
--- FAIL: TestFunctional/serial/LogsCmd (51.35s)
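
Every restart attempt in the journal above dies the same way: dockerd waits 60s and then gives up because it cannot dial containerd's socket ("context deadline exceeded"). A minimal Go sketch, illustrative and not part of the minikube suite, that probes the same socket with a bounded dial; running it inside the VM is a quick way to confirm whether containerd is accepting connections at all:

// socketcheck.go: dial a containerd-style unix socket with a deadline,
// mirroring the "failed to dial" timeout seen in the dockerd journal.
// The socket path is copied from the log output above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const socket = "/run/containerd/containerd.sock"
	conn, err := net.DialTimeout("unix", socket, 10*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "dial %s failed: %v\n", socket, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("%s accepted a connection\n", socket)
}

If the dial times out, the fault is on the containerd side rather than in dockerd, which is consistent with the restart loop recorded here.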

TestFunctional/parallel (0s)

=== RUN   TestFunctional/parallel
functional_test.go:186: Unable to run more tests (deadline exceeded)
--- FAIL: TestFunctional/parallel (0.00s)
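
The parallel group never runs because the serial failures above already consumed the suite's -timeout budget. A minimal sketch of the kind of guard functional_test.go applies before spawning subtests, using testing.T.Deadline; the names here are illustrative, not the actual minikube code:

// deadline_guard_test.go: skip further subtests once the -timeout budget
// is nearly exhausted, instead of letting them be killed mid-run.
package example

import (
	"testing"
	"time"
)

func TestParallelGroup(t *testing.T) {
	if deadline, ok := t.Deadline(); ok && time.Until(deadline) < 2*time.Minute {
		t.Fatalf("Unable to run more tests (deadline exceeded)")
	}
	// launch t.Run(...) subtests here
}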

TestMultiControlPlane/serial/PingHostFromPods (69.55s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-061400 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-061400 -- exec busybox-58667487b6-8xfwm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-061400 -- exec busybox-58667487b6-8xfwm -- sh -c "ping -c 1 192.168.112.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-061400 -- exec busybox-58667487b6-8xfwm -- sh -c "ping -c 1 192.168.112.1": exit status 1 (10.5311548s)

-- stdout --
	PING 192.168.112.1 (192.168.112.1): 56 data bytes
	
	--- 192.168.112.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (192.168.112.1) from pod (busybox-58667487b6-8xfwm): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-061400 -- exec busybox-58667487b6-rjkqv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-061400 -- exec busybox-58667487b6-rjkqv -- sh -c "ping -c 1 192.168.112.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-061400 -- exec busybox-58667487b6-rjkqv -- sh -c "ping -c 1 192.168.112.1": exit status 1 (10.5434169s)

-- stdout --
	PING 192.168.112.1 (192.168.112.1): 56 data bytes
	
	--- 192.168.112.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (192.168.112.1) from pod (busybox-58667487b6-rjkqv): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-061400 -- exec busybox-58667487b6-rxp4w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-061400 -- exec busybox-58667487b6-rxp4w -- sh -c "ping -c 1 192.168.112.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-061400 -- exec busybox-58667487b6-rxp4w -- sh -c "ping -c 1 192.168.112.1": exit status 1 (10.5299373s)

-- stdout --
	PING 192.168.112.1 (192.168.112.1): 56 data bytes
	
	--- 192.168.112.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (192.168.112.1) from pod (busybox-58667487b6-rxp4w): exit status 1
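
All three busybox pods resolve host.minikube.internal but lose every ICMP packet to the Hyper-V host gateway (192.168.112.1); with the hyperv driver this is commonly the Windows host firewall dropping echo requests rather than a CNI fault. A standalone Go sketch that re-runs the same probe the test performs, useful for reproducing the failure outside the harness (pod name and host IP are copied from the log above; it assumes a minikube binary on PATH):

// pingprobe.go: repeat the in-pod ping that ha_test.go runs.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pod, hostIP := "busybox-58667487b6-8xfwm", "192.168.112.1"
	cmd := exec.Command("minikube", "kubectl", "-p", "ha-061400", "--",
		"exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// exit status 1 here corresponds to the 100% packet loss above
		fmt.Println("probe failed:", err)
	}
}
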
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-061400 -n ha-061400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-061400 -n ha-061400: (12.5470785s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 logs -n 25: (8.95952s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:25 UTC | 08 Apr 25 23:25 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-618200 kubectl --                                             | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:28 UTC |                     |
	|         | --context functional-618200                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-618200                                                     | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:34 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	| delete  | -p functional-618200                                                     | functional-618200 | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:44 UTC | 08 Apr 25 23:46 UTC |
	| start   | -p ha-061400 --wait=true                                                 | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:46 UTC | 08 Apr 25 23:57 UTC |
	|         | --memory=2200 --ha                                                       |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr                                                   |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                          |                   |                   |         |                     |                     |
	| kubectl | -p ha-061400 -- apply -f                                                 | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:57 UTC | 08 Apr 25 23:57 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml                                       |                   |                   |         |                     |                     |
	| kubectl | -p ha-061400 -- rollout status                                           | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:57 UTC | 08 Apr 25 23:58 UTC |
	|         | deployment/busybox                                                       |                   |                   |         |                     |                     |
	| kubectl | -p ha-061400 -- get pods -o                                              | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:58 UTC | 08 Apr 25 23:58 UTC |
	|         | jsonpath='{.items[*].status.podIP}'                                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-061400 -- get pods -o                                              | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:58 UTC | 08 Apr 25 23:58 UTC |
	|         | jsonpath='{.items[*].metadata.name}'                                     |                   |                   |         |                     |                     |
	| kubectl | -p ha-061400 -- exec                                                     | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:58 UTC | 08 Apr 25 23:58 UTC |
	|         | busybox-58667487b6-8xfwm --                                              |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io                                                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-061400 -- exec                                                     | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:58 UTC | 08 Apr 25 23:58 UTC |
	|         | busybox-58667487b6-rjkqv --                                              |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io                                                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-061400 -- exec                                                     | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:58 UTC | 08 Apr 25 23:58 UTC |
	|         | busybox-58667487b6-rxp4w --                                              |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io                                                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-061400 -- exec                                                     | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:58 UTC | 08 Apr 25 23:58 UTC |
	|         | busybox-58667487b6-8xfwm --                                              |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default                                              |                   |                   |         |                     |                     |
	| kubectl | -p ha-061400 -- exec                                                     | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:58 UTC | 08 Apr 25 23:58 UTC |
	|         | busybox-58667487b6-rjkqv --                                              |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default                                              |                   |                   |         |                     |                     |
	| kubectl | -p ha-061400 -- exec                                                     | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:58 UTC | 08 Apr 25 23:58 UTC |
	|         | busybox-58667487b6-rxp4w --                                              |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default                                              |                   |                   |         |                     |                     |
	| kubectl | -p ha-061400 -- exec                                                     | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:58 UTC | 08 Apr 25 23:58 UTC |
	|         | busybox-58667487b6-8xfwm -- nslookup                                     |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                     |                   |                   |         |                     |                     |
	| kubectl | -p ha-061400 -- exec                                                     | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:58 UTC | 08 Apr 25 23:58 UTC |
	|         | busybox-58667487b6-rjkqv -- nslookup                                     |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                     |                   |                   |         |                     |                     |
	| kubectl | -p ha-061400 -- exec                                                     | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:58 UTC | 08 Apr 25 23:58 UTC |
	|         | busybox-58667487b6-rxp4w -- nslookup                                     |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                     |                   |                   |         |                     |                     |
	| kubectl | -p ha-061400 -- get pods -o                                              | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:58 UTC | 08 Apr 25 23:58 UTC |
	|         | jsonpath='{.items[*].metadata.name}'                                     |                   |                   |         |                     |                     |
	| kubectl | -p ha-061400 -- exec                                                     | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:58 UTC | 08 Apr 25 23:58 UTC |
	|         | busybox-58667487b6-8xfwm                                                 |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                                                        |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk                                             |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                  |                   |                   |         |                     |                     |
	| kubectl | -p ha-061400 -- exec                                                     | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:58 UTC |                     |
	|         | busybox-58667487b6-8xfwm -- sh                                           |                   |                   |         |                     |                     |
	|         | -c ping -c 1 192.168.112.1                                               |                   |                   |         |                     |                     |
	| kubectl | -p ha-061400 -- exec                                                     | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:58 UTC | 08 Apr 25 23:58 UTC |
	|         | busybox-58667487b6-rjkqv                                                 |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                                                        |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk                                             |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                  |                   |                   |         |                     |                     |
	| kubectl | -p ha-061400 -- exec                                                     | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:58 UTC |                     |
	|         | busybox-58667487b6-rjkqv -- sh                                           |                   |                   |         |                     |                     |
	|         | -c ping -c 1 192.168.112.1                                               |                   |                   |         |                     |                     |
	| kubectl | -p ha-061400 -- exec                                                     | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:58 UTC | 08 Apr 25 23:58 UTC |
	|         | busybox-58667487b6-rxp4w                                                 |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                                                        |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk                                             |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                  |                   |                   |         |                     |                     |
	| kubectl | -p ha-061400 -- exec                                                     | ha-061400         | minikube6\jenkins | v1.35.0 | 08 Apr 25 23:58 UTC |                     |
	|         | busybox-58667487b6-rxp4w -- sh                                           |                   |                   |         |                     |                     |
	|         | -c ping -c 1 192.168.112.1                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 23:46:05
	Running on machine: minikube6
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 23:46:05.713268    7680 out.go:345] Setting OutFile to fd 1072 ...
	I0408 23:46:05.782891    7680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:46:05.782891    7680 out.go:358] Setting ErrFile to fd 1268...
	I0408 23:46:05.782891    7680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:46:05.804615    7680 out.go:352] Setting JSON to false
	I0408 23:46:05.807921    7680 start.go:129] hostinfo: {"hostname":"minikube6","uptime":12963,"bootTime":1744143002,"procs":175,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0408 23:46:05.807921    7680 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 23:46:05.812960    7680 out.go:177] * [ha-061400] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0408 23:46:05.817953    7680 notify.go:220] Checking for updates...
	I0408 23:46:05.817994    7680 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0408 23:46:05.821887    7680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 23:46:05.824808    7680 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0408 23:46:05.827473    7680 out.go:177]   - MINIKUBE_LOCATION=20501
	I0408 23:46:05.829945    7680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 23:46:05.834193    7680 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 23:46:11.014871    7680 out.go:177] * Using the hyperv driver based on user configuration
	I0408 23:46:11.018348    7680 start.go:297] selected driver: hyperv
	I0408 23:46:11.018348    7680 start.go:901] validating driver "hyperv" against <nil>
	I0408 23:46:11.018348    7680 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 23:46:11.072670    7680 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0408 23:46:11.073693    7680 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 23:46:11.073693    7680 cni.go:84] Creating CNI manager for ""
	I0408 23:46:11.073693    7680 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0408 23:46:11.073693    7680 start_flags.go:320] Found "CNI" CNI - setting NetworkPlugin=cni
	I0408 23:46:11.074783    7680 start.go:340] cluster config:
	{Name:ha-061400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:46:11.074860    7680 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 23:46:11.079661    7680 out.go:177] * Starting "ha-061400" primary control-plane node in "ha-061400" cluster
	I0408 23:46:11.083186    7680 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0408 23:46:11.083327    7680 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0408 23:46:11.083327    7680 cache.go:56] Caching tarball of preloaded images
	I0408 23:46:11.083327    7680 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0408 23:46:11.083989    7680 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0408 23:46:11.083989    7680 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\config.json ...
	I0408 23:46:11.084793    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\config.json: {Name:mk1cc615eb76a4f9e67628aefb51723da50e1159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:46:11.085897    7680 start.go:360] acquireMachinesLock for ha-061400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 23:46:11.085897    7680 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-061400"
	I0408 23:46:11.086566    7680 start.go:93] Provisioning new machine with config: &{Name:ha-061400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 23:46:11.086566    7680 start.go:125] createHost starting for "" (driver="hyperv")
	I0408 23:46:11.090881    7680 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 23:46:11.091835    7680 start.go:159] libmachine.API.Create for "ha-061400" (driver="hyperv")
	I0408 23:46:11.091835    7680 client.go:168] LocalClient.Create starting
	I0408 23:46:11.091835    7680 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0408 23:46:11.091835    7680 main.go:141] libmachine: Decoding PEM data...
	I0408 23:46:11.091835    7680 main.go:141] libmachine: Parsing certificate...
	I0408 23:46:11.091835    7680 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0408 23:46:11.091835    7680 main.go:141] libmachine: Decoding PEM data...
	I0408 23:46:11.091835    7680 main.go:141] libmachine: Parsing certificate...
	I0408 23:46:11.093385    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0408 23:46:13.111509    7680 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0408 23:46:13.111509    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:13.111617    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0408 23:46:14.779011    7680 main.go:141] libmachine: [stdout =====>] : False
	
	I0408 23:46:14.779585    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:14.779585    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0408 23:46:16.204660    7680 main.go:141] libmachine: [stdout =====>] : True
	
	I0408 23:46:16.204660    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:16.204660    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0408 23:46:19.720271    7680 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0408 23:46:19.720271    7680 main.go:141] libmachine: [stderr =====>] : 
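
For orientation, the switch-selection step above follows the pattern every "[executing ==>]" line in this log shares: shell out to powershell.exe with -NoProfile -NonInteractive, force UTF-8 output, and parse the ConvertTo-Json result. A minimal Go sketch (Go being the language of the driver code logged from main.go) looks roughly as follows; the cmdlet pipeline and the Default Switch GUID are copied from the log, while the function names and error handling are illustrative.

package sketch

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// vmSwitch mirrors the Id/Name/SwitchType properties selected in the query above.
// The Default Switch reports SwitchType 1 (an internal switch) in this run.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

// listCandidateSwitches runs the same pipeline seen in the log: enumerate
// switches, keep external ones plus the well-known Default Switch GUID.
func listCandidateSwitches() ([]vmSwitch, error) {
	script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
		`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType | ` +
		`Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')} | ` +
		`Sort-Object -Property SwitchType)`
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	if err != nil {
		return nil, fmt.Errorf("powershell: %w", err)
	}
	var switches []vmSwitch
	err = json.Unmarshal(out, &switches)
	return switches, err
}
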
	I0408 23:46:19.723242    7680 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0408 23:46:20.202897    7680 main.go:141] libmachine: Creating SSH key...
	I0408 23:46:20.609639    7680 main.go:141] libmachine: Creating VM...
	I0408 23:46:20.609639    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0408 23:46:23.422179    7680 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0408 23:46:23.422179    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:23.422936    7680 main.go:141] libmachine: Using switch "Default Switch"
	I0408 23:46:23.422995    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0408 23:46:25.096040    7680 main.go:141] libmachine: [stdout =====>] : True
	
	I0408 23:46:25.096189    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:25.096189    7680 main.go:141] libmachine: Creating VHD
	I0408 23:46:25.096189    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0408 23:46:28.788861    7680 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 394A1494-325F-4CA9-A009-3434592A9134
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0408 23:46:28.788861    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:28.789029    7680 main.go:141] libmachine: Writing magic tar header
	I0408 23:46:28.789133    7680 main.go:141] libmachine: Writing SSH key tar header
	I0408 23:46:28.801281    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0408 23:46:31.941136    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:46:31.941250    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:31.941337    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\disk.vhd' -SizeBytes 20000MB
	I0408 23:46:34.497903    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:46:34.498610    7680 main.go:141] libmachine: [stderr =====>] : 
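
The VHD dance above (a 10 MB fixed VHD, a "magic" tar header plus the SSH key written into it, then Convert-VHD to dynamic and Resize-VHD to the final 20000MB) is the docker-machine trick for smuggling the SSH public key into the guest: a fixed VHD's data area starts at offset 0, and the guest image looks for a tar archive there on first boot and extracts the key. A rough Go sketch follows; the PowerShell steps are copied from the log, but the tar payload is simplified to a single authorized_keys entry and the helper names are illustrative, not minikube's code.

package sketch

import (
	"archive/tar"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// ps runs one PowerShell command, the way each "[executing ==>]" line does.
func ps(cmd string) error {
	return exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Run()
}

// createBootDisk: tiny fixed VHD -> tar stream with the SSH key at offset 0
// -> convert to dynamic -> grow to the requested size.
func createBootDisk(dir, pubKeyPath string, sizeMB int) error {
	fixed := filepath.Join(dir, "fixed.vhd")
	disk := filepath.Join(dir, "disk.vhd")

	if err := ps(fmt.Sprintf("Hyper-V\\New-VHD -Path '%s' -SizeBytes 10MB -Fixed", fixed)); err != nil {
		return err
	}

	// Write the "magic" tar header plus the key into the raw data area.
	key, err := os.ReadFile(pubKeyPath)
	if err != nil {
		return err
	}
	f, err := os.OpenFile(fixed, os.O_WRONLY, 0)
	if err != nil {
		return err
	}
	tw := tar.NewWriter(f)
	if err := tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(key))}); err != nil {
		return err
	}
	if _, err := tw.Write(key); err != nil {
		return err
	}
	if err := tw.Close(); err != nil {
		return err
	}
	if err := f.Close(); err != nil {
		return err
	}

	if err := ps(fmt.Sprintf("Hyper-V\\Convert-VHD -Path '%s' -DestinationPath '%s' -VHDType Dynamic -DeleteSource", fixed, disk)); err != nil {
		return err
	}
	return ps(fmt.Sprintf("Hyper-V\\Resize-VHD -Path '%s' -SizeBytes %dMB", disk, sizeMB))
}
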
	I0408 23:46:34.498610    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-061400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0408 23:46:38.061855    7680 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-061400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0408 23:46:38.062960    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:38.063063    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-061400 -DynamicMemoryEnabled $false
	I0408 23:46:40.300145    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:46:40.300999    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:40.301101    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-061400 -Count 2
	I0408 23:46:42.532653    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:46:42.532653    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:42.533293    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-061400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\boot2docker.iso'
	I0408 23:46:45.113692    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:46:45.113762    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:45.113762    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-061400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\disk.vhd'
	I0408 23:46:47.702557    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:46:47.702557    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:47.703111    7680 main.go:141] libmachine: Starting VM...
	I0408 23:46:47.703149    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-061400
	I0408 23:46:50.748534    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:46:50.748868    7680 main.go:141] libmachine: [stderr =====>] : 
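
The six cmdlets just executed form the entire VM assembly: create the VM on the chosen switch, pin static memory and the CPU count, attach the boot ISO and the freshly built data disk, and power it on. Condensed into Go (cmdlets and flags from the log; helper names illustrative):

package sketch

import (
	"fmt"
	"os/exec"
)

func ps(cmd string) error {
	return exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Run()
}

// assembleAndStartVM replays the cmdlet sequence from the log, in order.
func assembleAndStartVM(name, dir, sw string, memMB, cpus int) error {
	steps := []string{
		fmt.Sprintf("Hyper-V\\New-VM %s -Path '%s' -SwitchName '%s' -MemoryStartupBytes %dMB", name, dir, sw, memMB),
		fmt.Sprintf("Hyper-V\\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false", name),
		fmt.Sprintf("Hyper-V\\Set-VMProcessor %s -Count %d", name, cpus),
		fmt.Sprintf("Hyper-V\\Set-VMDvdDrive -VMName %s -Path '%s\\boot2docker.iso'", name, dir),
		fmt.Sprintf("Hyper-V\\Add-VMHardDiskDrive -VMName %s -Path '%s\\disk.vhd'", name, dir),
		fmt.Sprintf("Hyper-V\\Start-VM %s", name),
	}
	for _, s := range steps {
		if err := ps(s); err != nil {
			return fmt.Errorf("%q: %w", s, err)
		}
	}
	return nil
}
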
	I0408 23:46:50.748868    7680 main.go:141] libmachine: Waiting for host to start...
	I0408 23:46:50.748990    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:46:52.997942    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:46:52.998052    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:52.998052    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:46:55.504673    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:46:55.504673    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:56.505969    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:46:58.750675    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:46:58.750675    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:58.750675    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:01.261071    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:47:01.261489    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:02.261921    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:04.500047    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:04.500047    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:04.500047    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:06.994730    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:47:06.994730    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:07.994795    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:10.229017    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:10.229017    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:10.229924    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:12.806708    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:47:12.806766    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:13.807273    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:16.051095    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:16.052102    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:16.052102    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:18.567166    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:47:18.567166    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:18.567166    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:20.676535    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:20.676726    7680 main.go:141] libmachine: [stderr =====>] : 
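
The repeated state/ipaddresses queries between Start-VM and this point are a plain polling loop: while the VM reports Running, ask its first network adapter for an address; an empty answer means the guest has no DHCP lease yet, so sleep and retry. The run above takes a handful of attempts before 192.168.119.206 appears. A sketch, with the timeout and retry interval illustrative:

package sketch

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func psOut(cmd string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

// waitForIP polls the same two queries seen in the log until an address shows up.
func waitForIP(name string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := psOut(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name))
		if err != nil {
			return "", err
		}
		if state == "Running" {
			ip, err := psOut(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name))
			if err != nil {
				return "", err
			}
			if ip != "" {
				return ip, nil // 192.168.119.206 in the run above
			}
		}
		time.Sleep(time.Second) // matches the ~1s pause between attempts in the log
	}
	return "", fmt.Errorf("timed out waiting for an IP on %s", name)
}
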
	I0408 23:47:20.676726    7680 machine.go:93] provisionDockerMachine start ...
	I0408 23:47:20.676726    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:22.816637    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:22.817119    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:22.817119    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:25.281340    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:47:25.282178    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:25.288162    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:47:25.302648    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.119.206 22 <nil> <nil>}
	I0408 23:47:25.302715    7680 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 23:47:25.426721    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 23:47:25.426829    7680 buildroot.go:166] provisioning hostname "ha-061400"
	I0408 23:47:25.426829    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:27.518057    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:27.518057    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:27.518134    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:30.022921    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:47:30.022921    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:30.027478    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:47:30.028276    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.119.206 22 <nil> <nil>}
	I0408 23:47:30.028276    7680 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-061400 && echo "ha-061400" | sudo tee /etc/hostname
	I0408 23:47:30.193197    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-061400
	
	I0408 23:47:30.193197    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:32.280966    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:32.281290    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:32.281290    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:34.743525    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:47:34.743525    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:34.749367    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:47:34.750082    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.119.206 22 <nil> <nil>}
	I0408 23:47:34.750082    7680 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-061400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-061400/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-061400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 23:47:34.888358    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
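
Every "About to run SSH command" block above goes through the "native" SSH client the log names, i.e. an in-process client keyed with the per-machine id_rsa file rather than a shelled-out ssh.exe. A minimal sketch with golang.org/x/crypto/ssh, assuming user "docker", port 22 and the key path reported by sshutil.go:53; the helper itself is illustrative, not minikube's code.

package sketch

import (
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH dials the guest, authenticates with the machine key, runs one
// command, and returns its combined output.
// e.g. runSSH("192.168.119.206", keyPath, "hostname") -> "minikube", as above.
func runSSH(host, keyPath, command string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs have throwaway host keys
	}
	client, err := ssh.Dial("tcp", host+":22", cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(command)
	return string(out), err
}
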
	I0408 23:47:34.888420    7680 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0408 23:47:34.888484    7680 buildroot.go:174] setting up certificates
	I0408 23:47:34.888576    7680 provision.go:84] configureAuth start
	I0408 23:47:34.888676    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:36.946744    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:36.947749    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:36.947787    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:39.460615    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:47:39.462061    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:39.462151    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:41.500916    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:41.500967    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:41.500967    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:43.966053    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:47:43.966053    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:43.966260    7680 provision.go:143] copyHostCerts
	I0408 23:47:43.966429    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0408 23:47:43.966657    7680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0408 23:47:43.966751    7680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0408 23:47:43.967202    7680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0408 23:47:43.968669    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0408 23:47:43.968956    7680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0408 23:47:43.969025    7680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0408 23:47:43.969383    7680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0408 23:47:43.970587    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0408 23:47:43.970844    7680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0408 23:47:43.970949    7680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0408 23:47:43.971370    7680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0408 23:47:43.972256    7680 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-061400 san=[127.0.0.1 192.168.119.206 ha-061400 localhost minikube]
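
The server certificate generated here is a CA-signed leaf whose SANs cover every name the Docker daemon will be reached by: 127.0.0.1, the VM address, the machine name, localhost and minikube, with org jenkins.ha-061400 and the 26280h lifetime from the config dump. A sketch of that step with crypto/x509; the key size and exact extensions are assumptions.

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a leaf signed by the local CA, with the SAN list
// logged above baked into DNSNames/IPAddresses.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-061400"}},
		DNSNames:     []string{"ha-061400", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.119.206")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return der, key, err
}
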
	I0408 23:47:44.157929    7680 provision.go:177] copyRemoteCerts
	I0408 23:47:44.169937    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 23:47:44.169937    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:46.225885    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:46.225885    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:46.226514    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:48.729822    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:47:48.730848    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:48.731389    7680 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0408 23:47:48.848305    7680 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6783065s)
	I0408 23:47:48.848305    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0408 23:47:48.848678    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0408 23:47:48.894059    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0408 23:47:48.894086    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0408 23:47:48.935927    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0408 23:47:48.936311    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 23:47:48.976669    7680 provision.go:87] duration metric: took 14.0878196s to configureAuth
	I0408 23:47:48.976669    7680 buildroot.go:189] setting minikube options for container-runtime
	I0408 23:47:48.976925    7680 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:47:48.976925    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:51.123956    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:51.124252    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:51.124252    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:53.652413    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:47:53.652413    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:53.658532    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:47:53.659297    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.119.206 22 <nil> <nil>}
	I0408 23:47:53.659297    7680 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0408 23:47:53.790134    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0408 23:47:53.790134    7680 buildroot.go:70] root file system type: tmpfs
	I0408 23:47:53.790362    7680 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0408 23:47:53.790440    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:55.862317    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:55.862405    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:55.862405    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:58.349515    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:47:58.350307    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:58.356398    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:47:58.357092    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.119.206 22 <nil> <nil>}
	I0408 23:47:58.357092    7680 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0408 23:47:58.522869    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0408 23:47:58.523419    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:48:00.661956    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:48:00.661956    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:00.663127    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:48:03.201659    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:48:03.201936    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:03.208215    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:48:03.208367    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.119.206 22 <nil> <nil>}
	I0408 23:48:03.208367    7680 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0408 23:48:05.435650    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
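
The one-liner above is what makes unit provisioning idempotent: the rendered unit lands in docker.service.new, and only when it differs from the live unit is it moved into place, followed by daemon-reload/enable/restart. On this fresh VM the diff fails with "can't stat", so the swap always runs once. Expressed as a Go helper that builds the same shell command (illustrative, not minikube's code):

package sketch

import "fmt"

// updateUnitCommand returns the diff-or-swap one-liner seen in the log for
// an arbitrary unit path, e.g. /lib/systemd/system/docker.service.
func updateUnitCommand(path string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
		path)
}
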
	
	I0408 23:48:05.435650    7680 machine.go:96] duration metric: took 44.7583375s to provisionDockerMachine
	I0408 23:48:05.436221    7680 client.go:171] duration metric: took 1m54.3428816s to LocalClient.Create
	I0408 23:48:05.436271    7680 start.go:167] duration metric: took 1m54.3428816s to libmachine.API.Create "ha-061400"
	I0408 23:48:05.436345    7680 start.go:293] postStartSetup for "ha-061400" (driver="hyperv")
	I0408 23:48:05.436345    7680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 23:48:05.447627    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 23:48:05.447627    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:48:07.466827    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:48:07.466827    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:07.466827    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:48:09.952018    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:48:09.952018    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:09.952185    7680 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0408 23:48:10.060611    7680 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6128708s)
	I0408 23:48:10.072338    7680 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 23:48:10.078585    7680 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 23:48:10.078585    7680 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0408 23:48:10.079263    7680 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0408 23:48:10.080154    7680 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
	I0408 23:48:10.080225    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /etc/ssl/certs/98642.pem
	I0408 23:48:10.090789    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 23:48:10.111243    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
	I0408 23:48:10.155509    7680 start.go:296] duration metric: took 4.7191017s for postStartSetup
	I0408 23:48:10.159178    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:48:12.218775    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:48:12.218775    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:12.219798    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:48:14.693154    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:48:14.693154    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:14.694420    7680 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\config.json ...
	I0408 23:48:14.698302    7680 start.go:128] duration metric: took 2m3.6101097s to createHost
	I0408 23:48:14.698603    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:48:16.721165    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:48:16.721165    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:16.721499    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:48:19.184134    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:48:19.184651    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:19.191131    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:48:19.191932    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.119.206 22 <nil> <nil>}
	I0408 23:48:19.191932    7680 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 23:48:19.322730    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744156099.349496772
	
	I0408 23:48:19.322819    7680 fix.go:216] guest clock: 1744156099.349496772
	I0408 23:48:19.322819    7680 fix.go:229] Guest: 2025-04-08 23:48:19.349496772 +0000 UTC Remote: 2025-04-08 23:48:14.6984524 +0000 UTC m=+129.066470901 (delta=4.651044372s)
	I0408 23:48:19.323027    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:48:21.398377    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:48:21.398377    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:21.399311    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:48:23.815997    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:48:23.815997    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:23.823228    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:48:23.823970    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.119.206 22 <nil> <nil>}
	I0408 23:48:23.823970    7680 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744156099
	I0408 23:48:23.972884    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr  8 23:48:19 UTC 2025
	
	I0408 23:48:23.972884    7680 fix.go:236] clock set: Tue Apr  8 23:48:19 UTC 2025
	 (err=<nil>)
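
The clock fix-up just logged reads the guest clock over SSH with date +%s.%N, compares it against the host clock (delta=4.651044372s here), and resets the guest with sudo date -s @<seconds>. A sketch of that logic; the drift threshold and the choice of reference clock are assumptions, and run stands in for any execute-over-SSH helper rather than minikube's actual signature.

package sketch

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// syncGuestClock measures guest-vs-host drift and corrects it when it is
// large enough to matter (e.g. for TLS certificate validity).
func syncGuestClock(run func(string) (string, error)) error {
	out, err := run("date +%s.%N")
	if err != nil {
		return err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return err
	}
	delta := time.Since(time.Unix(int64(secs), 0))
	if delta < 0 {
		delta = -delta
	}
	if delta < 2*time.Second { // small drift: leave the guest alone (threshold assumed)
		return nil
	}
	_, err = run(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
	return err
}
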
	I0408 23:48:23.972884    7680 start.go:83] releasing machines lock for "ha-061400", held for 2m12.8852393s
	I0408 23:48:23.972884    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:48:26.028915    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:48:26.028915    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:26.028915    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:48:28.465373    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:48:28.465373    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:28.469333    7680 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0408 23:48:28.469404    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:48:28.483440    7680 ssh_runner.go:195] Run: cat /version.json
	I0408 23:48:28.483440    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:48:30.708233    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:48:30.708233    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:30.708233    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:48:30.722675    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:48:30.723580    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:30.723580    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:48:33.317056    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:48:33.317634    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:33.317634    7680 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0408 23:48:33.343806    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:48:33.343806    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:33.343806    7680 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0408 23:48:33.418288    7680 ssh_runner.go:235] Completed: cat /version.json: (4.9347833s)
	I0408 23:48:33.431856    7680 ssh_runner.go:195] Run: systemctl --version
	I0408 23:48:33.437011    7680 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9676127s)
	W0408 23:48:33.437011    7680 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
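
Note what actually failed here: the Windows binary name curl.exe was run inside the Linux guest, so the registry probe reports "command not found" rather than a real connectivity result, and that in turn drives the proxy warnings below. A hypothetical guard (not minikube's code, and assuming the guest ships curl at all) would pick the binary name by where the command executes, not where minikube runs:

package sketch

// curlBinary returns the curl executable name for the OS the command will
// run on; the boot2docker guest is Linux even when the host is Windows.
func curlBinary(guestOS string) string {
	if guestOS == "windows" {
		return "curl.exe"
	}
	return "curl"
}
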
	I0408 23:48:33.454283    7680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 23:48:33.462481    7680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 23:48:33.472801    7680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 23:48:33.503373    7680 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 23:48:33.503373    7680 start.go:495] detecting cgroup driver to use...
	I0408 23:48:33.503373    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:48:33.550859    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0408 23:48:33.568525    7680 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0408 23:48:33.568601    7680 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0408 23:48:33.582205    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 23:48:33.601734    7680 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 23:48:33.612459    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 23:48:33.641890    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:48:33.673820    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 23:48:33.704040    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:48:33.732538    7680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 23:48:33.763459    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 23:48:33.792444    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 23:48:33.823010    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0408 23:48:33.856879    7680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 23:48:33.873481    7680 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 23:48:33.884201    7680 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 23:48:33.921136    7680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 23:48:33.948819    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:48:34.159015    7680 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 23:48:34.188652    7680 start.go:495] detecting cgroup driver to use...
	I0408 23:48:34.201484    7680 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0408 23:48:34.237127    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:48:34.268443    7680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 23:48:34.306555    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:48:34.341974    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 23:48:34.376665    7680 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0408 23:48:34.442336    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 23:48:34.464787    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:48:34.513002    7680 ssh_runner.go:195] Run: which cri-dockerd
	I0408 23:48:34.529599    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0408 23:48:34.552405    7680 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0408 23:48:34.607713    7680 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0408 23:48:34.826450    7680 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0408 23:48:34.999269    7680 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0408 23:48:34.999704    7680 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
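
The 130-byte daemon.json written here carries the cgroupfs decision down to dockerd. The exact payload is not echoed in the log, so the following shape is an assumption, shown only to make the step concrete:

package sketch

import "encoding/json"

// dockerDaemonConfig is a plausible (assumed) shape for the daemon.json above;
// the real file minikube writes may carry additional keys.
type dockerDaemonConfig struct {
	ExecOpts []string `json:"exec-opts"`
}

// renderDaemonJSON produces a daemon.json that pins the cgroup driver.
func renderDaemonJSON(cgroupDriver string) ([]byte, error) {
	return json.MarshalIndent(dockerDaemonConfig{
		ExecOpts: []string{"native.cgroupdriver=" + cgroupDriver},
	}, "", "  ")
}
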
	I0408 23:48:35.041706    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:48:35.253852    7680 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 23:48:37.883559    7680 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6294976s)
	I0408 23:48:37.895865    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0408 23:48:37.930543    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 23:48:37.961693    7680 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0408 23:48:38.176435    7680 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0408 23:48:38.390290    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:48:38.592435    7680 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0408 23:48:38.633001    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 23:48:38.669266    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:48:38.875755    7680 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0408 23:48:39.000905    7680 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0408 23:48:39.012336    7680 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0408 23:48:39.021472    7680 start.go:563] Will wait 60s for crictl version
	I0408 23:48:39.033128    7680 ssh_runner.go:195] Run: which crictl
	I0408 23:48:39.050297    7680 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 23:48:39.102468    7680 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0408 23:48:39.112381    7680 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 23:48:39.154499    7680 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 23:48:39.191772    7680 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0408 23:48:39.191950    7680 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0408 23:48:39.196205    7680 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0408 23:48:39.196205    7680 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0408 23:48:39.196205    7680 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0408 23:48:39.196205    7680 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f4:da:75 Flags:up|broadcast|multicast|running}
	I0408 23:48:39.198845    7680 ip.go:214] interface addr: fe80::e8ab:9cc6:22b1:a5fc/64
	I0408 23:48:39.198845    7680 ip.go:214] interface addr: 192.168.112.1/20
	I0408 23:48:39.209721    7680 ssh_runner.go:195] Run: grep 192.168.112.1	host.minikube.internal$ /etc/hosts
	I0408 23:48:39.214681    7680 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 23:48:39.250983    7680 kubeadm.go:883] updating cluster {Name:ha-061400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP:192.168.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.119.206 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 23:48:39.251363    7680 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0408 23:48:39.259754    7680 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0408 23:48:39.281668    7680 docker.go:689] Got preloaded images: 
	I0408 23:48:39.281668    7680 docker.go:695] registry.k8s.io/kube-apiserver:v1.32.2 wasn't preloaded
	I0408 23:48:39.294343    7680 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0408 23:48:39.322987    7680 ssh_runner.go:195] Run: which lz4
	I0408 23:48:39.329635    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0408 23:48:39.344116    7680 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 23:48:39.353323    7680 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 23:48:39.353323    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (349803115 bytes)
	I0408 23:48:41.140315    7680 docker.go:653] duration metric: took 1.8103388s to copy over tarball
	I0408 23:48:41.151698    7680 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 23:48:49.871956    7680 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.7201436s)
	I0408 23:48:49.871956    7680 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 23:48:49.933325    7680 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0408 23:48:49.951679    7680 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0408 23:48:49.992466    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:48:50.233880    7680 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 23:48:53.353436    7680 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1194364s)
	I0408 23:48:53.364472    7680 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0408 23:48:53.395672    7680 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0408 23:48:53.395807    7680 cache_images.go:84] Images are preloaded, skipping loading
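
The preload flow that just completed is: list the guest's images, notice kube-apiserver:v1.32.2 is absent, scp the ~350 MB preload tarball to /preloaded.tar.lz4, unpack it into /var with lz4, remove it, and restart Docker, after which docker images shows the full set above. A sketch of that control flow; run and scp stand in for the SSH command runner and file copier and are assumptions, not minikube's signatures.

package sketch

import (
	"fmt"
	"strings"
)

// ensurePreloaded mirrors the steps logged between 23:48:39 and 23:48:53.
func ensurePreloaded(run func(string) (string, error), scp func(local, remote string) error,
	localTarball, version string) error {
	out, err := run("docker images --format {{.Repository}}:{{.Tag}}")
	if err != nil {
		return err
	}
	if strings.Contains(out, "registry.k8s.io/kube-apiserver:"+version) {
		return nil // images are preloaded, skip loading (as the log reports)
	}
	if err := scp(localTarball, "/preloaded.tar.lz4"); err != nil {
		return err
	}
	cmds := []string{
		"sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4",
		"sudo rm -f /preloaded.tar.lz4",
		"sudo systemctl restart docker",
	}
	for _, c := range cmds {
		if _, err := run(c); err != nil {
			return fmt.Errorf("%q: %w", c, err)
		}
	}
	return nil
}
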
	I0408 23:48:53.395866    7680 kubeadm.go:934] updating node { 192.168.119.206 8443 v1.32.2 docker true true} ...
	I0408 23:48:53.395933    7680 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-061400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.119.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP:192.168.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 23:48:53.405093    7680 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0408 23:48:53.465284    7680 cni.go:84] Creating CNI manager for ""
	I0408 23:48:53.465353    7680 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0408 23:48:53.465401    7680 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0408 23:48:53.465452    7680 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.119.206 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-061400 NodeName:ha-061400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.119.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.119.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 23:48:53.465711    7680 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.119.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-061400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.119.206"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.119.206"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 23:48:53.465816    7680 kube-vip.go:115] generating kube-vip config ...
	I0408 23:48:53.477256    7680 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0408 23:48:53.504883    7680 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0408 23:48:53.505049    7680 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.127.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
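The static pod above is how kube-vip provides the HA control-plane endpoint: the vip_leaderelection / vip_leasename (plndr-cp-lock) / vip_leaseduration / vip_renewdeadline / vip_retryperiod settings make the control-plane nodes contend for a Kubernetes Lease so exactly one holder answers ARP for the VIP 192.168.127.254, and lb_enable adds load-balancing of apiserver traffic on port 8443. minikube renders this manifest from a Go text/template (kube-vip.go); a compressed sketch of that rendering pattern, using an illustrative struct rather than minikube's real one:

package main

import (
	"os"
	"text/template"
)

// A minimal sketch of templating a kube-vip static-pod manifest: the VIP
// address, port, and lb_enable flag are substituted into pod YAML.
// Field names are illustrative assumptions, not minikube's actual code.
var podTmpl = template.Must(template.New("vip").Parse(`apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.10
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
    - name: lb_enable
      value: "{{ .EnableLB }}"
  hostNetwork: true
`))

func main() {
	podTmpl.Execute(os.Stdout, struct {
		VIP      string
		Port     int
		EnableLB bool
	}{VIP: "192.168.127.254", Port: 8443, EnableLB: true})
}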
	I0408 23:48:53.516288    7680 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0408 23:48:53.529501    7680 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 23:48:53.540318    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0408 23:48:53.556320    7680 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0408 23:48:53.588746    7680 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 23:48:53.621791    7680 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I0408 23:48:53.657096    7680 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1449 bytes)
	I0408 23:48:53.704555    7680 ssh_runner.go:195] Run: grep 192.168.127.254	control-plane.minikube.internal$ /etc/hosts
	I0408 23:48:53.716343    7680 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.127.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
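The bash one-liner above is a filter-and-append rewrite of /etc/hosts: grep -v drops any stale line ending in a tab plus control-plane.minikube.internal, echo appends the fresh record pointing at the VIP, and the result is copied back into place with sudo. The same transformation in a small Go sketch, operating on an in-memory copy of the file (illustrative only; the real flow has to write through sudo on the remote VM):

package main

import (
	"fmt"
	"strings"
)

// setHostRecord removes any existing "<ip>\t<name>" line for name and
// appends a fresh record, mirroring the grep -v / echo pipeline above.
func setHostRecord(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale record for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n")
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.168.1.1\tcontrol-plane.minikube.internal"
	fmt.Println(setHostRecord(in, "192.168.127.254", "control-plane.minikube.internal"))
}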
	I0408 23:48:53.745142    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:48:53.933155    7680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 23:48:53.962295    7680 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400 for IP: 192.168.119.206
	I0408 23:48:53.962295    7680 certs.go:194] generating shared ca certs ...
	I0408 23:48:53.962357    7680 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:48:53.963446    7680 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0408 23:48:53.963923    7680 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0408 23:48:53.964229    7680 certs.go:256] generating profile certs ...
	I0408 23:48:53.965024    7680 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\client.key
	I0408 23:48:53.965309    7680 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\client.crt with IP's: []
	I0408 23:48:54.258874    7680 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\client.crt ...
	I0408 23:48:54.258874    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\client.crt: {Name:mke2bc007cddace728408cfa573486bd1946f7c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:48:54.260517    7680 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\client.key ...
	I0408 23:48:54.261162    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\client.key: {Name:mk9ee30629538570a76961b95a9be009f3ff090b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:48:54.262652    7680 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.fe0ed964
	I0408 23:48:54.262652    7680 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.fe0ed964 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.119.206 192.168.127.254]
	I0408 23:48:54.864367    7680 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.fe0ed964 ...
	I0408 23:48:54.864367    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.fe0ed964: {Name:mk154aafd603f4e1a5f8bfb5dc76325526227ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:48:54.865821    7680 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.fe0ed964 ...
	I0408 23:48:54.865821    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.fe0ed964: {Name:mk32887fc5c7c23fab60f22f907cc887cf8f8d4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:48:54.866158    7680 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.fe0ed964 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt
	I0408 23:48:54.887265    7680 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.fe0ed964 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key
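The apiserver cert generated above is signed for five IP SANs: 10.96.0.1 (the in-cluster `kubernetes` Service address, first IP of the 10.96.0.0/12 service CIDR), 127.0.0.1, 10.0.0.1, the node IP 192.168.119.206, and the HA VIP 192.168.127.254, so TLS verification succeeds no matter which of those addresses a client dials. A hedged Go sketch of issuing a CA-signed serving cert with IP SANs via crypto/x509 (not minikube's actual crypto.go, just the standard-library pattern):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// Sketch: sign a serving cert whose IP SANs cover every address the
// apiserver answers on (service IP, loopback, node IP, HA VIP).
func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.119.206"),
			net.ParseIP("192.168.127.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, leaf, ca, &leafKey.PublicKey, caKey)
	fmt.Println(len(der), err)
}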
	I0408 23:48:54.888951    7680 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key
	I0408 23:48:54.889061    7680 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.crt with IP's: []
	I0408 23:48:55.335597    7680 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.crt ...
	I0408 23:48:55.335597    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.crt: {Name:mk3f541cb97fbe77652a4540a6c8315ef59d8cdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:48:55.337926    7680 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key ...
	I0408 23:48:55.337926    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key: {Name:mk7e3ee8dd9016b2873628e06d7b062b75eebac7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:48:55.339800    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 23:48:55.340076    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0408 23:48:55.340224    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 23:48:55.340404    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 23:48:55.340541    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 23:48:55.340712    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 23:48:55.340832    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 23:48:55.352112    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 23:48:55.353643    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem (1338 bytes)
	W0408 23:48:55.354147    7680 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864_empty.pem, impossibly tiny 0 bytes
	I0408 23:48:55.354147    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0408 23:48:55.354564    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0408 23:48:55.354564    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0408 23:48:55.354564    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0408 23:48:55.354564    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem (1708 bytes)
	I0408 23:48:55.355872    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:48:55.356307    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem -> /usr/share/ca-certificates/9864.pem
	I0408 23:48:55.356500    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /usr/share/ca-certificates/98642.pem
	I0408 23:48:55.357777    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 23:48:55.403173    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 23:48:55.448232    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 23:48:55.491432    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 23:48:55.532048    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 23:48:55.573677    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 23:48:55.617908    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 23:48:55.661926    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 23:48:55.707719    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 23:48:55.753804    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem --> /usr/share/ca-certificates/9864.pem (1338 bytes)
	I0408 23:48:55.805108    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /usr/share/ca-certificates/98642.pem (1708 bytes)
	I0408 23:48:55.858228    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0408 23:48:55.900915    7680 ssh_runner.go:195] Run: openssl version
	I0408 23:48:55.919761    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 23:48:55.949886    7680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:48:55.956943    7680 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:48:55.967634    7680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:48:55.989262    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 23:48:56.016429    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9864.pem && ln -fs /usr/share/ca-certificates/9864.pem /etc/ssl/certs/9864.pem"
	I0408 23:48:56.045374    7680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9864.pem
	I0408 23:48:56.051787    7680 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 23:04 /usr/share/ca-certificates/9864.pem
	I0408 23:48:56.062427    7680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9864.pem
	I0408 23:48:56.082891    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9864.pem /etc/ssl/certs/51391683.0"
	I0408 23:48:56.112156    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98642.pem && ln -fs /usr/share/ca-certificates/98642.pem /etc/ssl/certs/98642.pem"
	I0408 23:48:56.142896    7680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98642.pem
	I0408 23:48:56.149490    7680 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 23:04 /usr/share/ca-certificates/98642.pem
	I0408 23:48:56.159595    7680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98642.pem
	I0408 23:48:56.178928    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98642.pem /etc/ssl/certs/3ec20f2e.0"
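Each openssl/ln pair above registers a CA certificate the way OpenSSL expects to find it: `openssl x509 -hash` prints the subject-name hash (b5213941 for minikubeCA.pem above), and the PEM is symlinked as /etc/ssl/certs/<hash>.0 so OpenSSL can locate it by issuer during chain verification. A small Go sketch of computing that link name by shelling out to the same CLI (hypothetical helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHashLink returns the /etc/ssl/certs symlink name OpenSSL expects
// for pemPath, pairing `openssl x509 -hash` with the ".0" suffix seen in
// the ln -fs commands above. Sketch only; does not create the symlink.
func subjectHashLink(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println(link, err) // e.g. /etc/ssl/certs/b5213941.0
}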
	I0408 23:48:56.210170    7680 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 23:48:56.216412    7680 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 23:48:56.216849    7680 kubeadm.go:392] StartCluster: {Name:ha-061400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP:192.168.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.119.206 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:48:56.226234    7680 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0408 23:48:56.258393    7680 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 23:48:56.286269    7680 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 23:48:56.315318    7680 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 23:48:56.331828    7680 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 23:48:56.331877    7680 kubeadm.go:157] found existing configuration files:
	
	I0408 23:48:56.343082    7680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 23:48:56.360774    7680 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 23:48:56.371868    7680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 23:48:56.400765    7680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 23:48:56.415776    7680 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 23:48:56.427526    7680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 23:48:56.457921    7680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 23:48:56.482073    7680 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 23:48:56.492371    7680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 23:48:56.519682    7680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 23:48:56.534855    7680 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 23:48:56.546009    7680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 23:48:56.566218    7680 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
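Because minikube has already written the manifests directory, certificates, and etcd data dir, kubeadm's preflight checks would otherwise refuse to proceed; the --ignore-preflight-errors list above waives exactly those DirAvailable/FileAvailable checks plus the resource checks (Port-10250, Swap, NumCPU, Mem) that small test VMs tend to trip. A hedged Go sketch of assembling that invocation string (illustrative, not minikube's bootstrapper code):

package main

import (
	"fmt"
	"strings"
)

// Sketch of the kubeadm init command from the log: the PATH prefix points
// at minikube's vendored binaries, and the ignore list waives preflight
// checks for files and dirs that minikube pre-created.
func main() {
	ignores := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"DirAvailable--var-lib-minikube-etcd",
		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
		"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
		"Port-10250", "Swap", "NumCPU", "Mem",
	}
	cmd := fmt.Sprintf(
		`sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init`+
			` --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s`,
		strings.Join(ignores, ","))
	fmt.Println(cmd)
}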
	I0408 23:48:57.030140    7680 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 23:49:11.683504    7680 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0408 23:49:11.683714    7680 kubeadm.go:310] [preflight] Running pre-flight checks
	I0408 23:49:11.683925    7680 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 23:49:11.684358    7680 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 23:49:11.684592    7680 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0408 23:49:11.684897    7680 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 23:49:11.690342    7680 out.go:235]   - Generating certificates and keys ...
	I0408 23:49:11.690342    7680 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0408 23:49:11.690342    7680 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0408 23:49:11.691079    7680 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0408 23:49:11.691205    7680 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0408 23:49:11.691205    7680 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0408 23:49:11.691205    7680 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0408 23:49:11.691733    7680 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0408 23:49:11.692018    7680 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-061400 localhost] and IPs [192.168.119.206 127.0.0.1 ::1]
	I0408 23:49:11.692018    7680 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0408 23:49:11.692018    7680 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-061400 localhost] and IPs [192.168.119.206 127.0.0.1 ::1]
	I0408 23:49:11.692637    7680 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0408 23:49:11.692974    7680 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0408 23:49:11.692974    7680 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0408 23:49:11.692974    7680 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 23:49:11.692974    7680 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 23:49:11.693587    7680 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 23:49:11.693587    7680 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 23:49:11.693854    7680 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 23:49:11.693854    7680 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 23:49:11.693854    7680 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 23:49:11.694396    7680 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 23:49:11.697634    7680 out.go:235]   - Booting up control plane ...
	I0408 23:49:11.698218    7680 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 23:49:11.698218    7680 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 23:49:11.698218    7680 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 23:49:11.698218    7680 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 23:49:11.698897    7680 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 23:49:11.699159    7680 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0408 23:49:11.699557    7680 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0408 23:49:11.699928    7680 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0408 23:49:11.700093    7680 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002145099s
	I0408 23:49:11.700271    7680 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0408 23:49:11.700418    7680 kubeadm.go:310] [api-check] The API server is healthy after 8.743297649s
	I0408 23:49:11.700745    7680 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 23:49:11.701110    7680 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 23:49:11.701259    7680 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 23:49:11.701734    7680 kubeadm.go:310] [mark-control-plane] Marking the node ha-061400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 23:49:11.701839    7680 kubeadm.go:310] [bootstrap-token] Using token: 1oehw4.v0ilnzd04t5ken5b
	I0408 23:49:11.704323    7680 out.go:235]   - Configuring RBAC rules ...
	I0408 23:49:11.704717    7680 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 23:49:11.704717    7680 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 23:49:11.704717    7680 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 23:49:11.705452    7680 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 23:49:11.705714    7680 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 23:49:11.705714    7680 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 23:49:11.706246    7680 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 23:49:11.706364    7680 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0408 23:49:11.706519    7680 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0408 23:49:11.706519    7680 kubeadm.go:310] 
	I0408 23:49:11.706625    7680 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0408 23:49:11.706625    7680 kubeadm.go:310] 
	I0408 23:49:11.706625    7680 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0408 23:49:11.706625    7680 kubeadm.go:310] 
	I0408 23:49:11.706625    7680 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0408 23:49:11.706625    7680 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 23:49:11.707255    7680 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 23:49:11.707255    7680 kubeadm.go:310] 
	I0408 23:49:11.707255    7680 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0408 23:49:11.707255    7680 kubeadm.go:310] 
	I0408 23:49:11.707255    7680 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 23:49:11.707255    7680 kubeadm.go:310] 
	I0408 23:49:11.707255    7680 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0408 23:49:11.707255    7680 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 23:49:11.707957    7680 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 23:49:11.707957    7680 kubeadm.go:310] 
	I0408 23:49:11.708070    7680 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 23:49:11.708070    7680 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0408 23:49:11.708070    7680 kubeadm.go:310] 
	I0408 23:49:11.708070    7680 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1oehw4.v0ilnzd04t5ken5b \
	I0408 23:49:11.708628    7680 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334 \
	I0408 23:49:11.708702    7680 kubeadm.go:310] 	--control-plane 
	I0408 23:49:11.708747    7680 kubeadm.go:310] 
	I0408 23:49:11.708837    7680 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0408 23:49:11.708837    7680 kubeadm.go:310] 
	I0408 23:49:11.708944    7680 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1oehw4.v0ilnzd04t5ken5b \
	I0408 23:49:11.709205    7680 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334 
	I0408 23:49:11.709205    7680 cni.go:84] Creating CNI manager for ""
	I0408 23:49:11.709205    7680 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0408 23:49:11.712548    7680 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0408 23:49:11.724293    7680 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0408 23:49:11.733136    7680 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0408 23:49:11.733136    7680 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0408 23:49:11.775152    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0408 23:49:12.523033    7680 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 23:49:12.537006    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 23:49:12.537006    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-061400 minikube.k8s.io/updated_at=2025_04_08T23_49_12_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83 minikube.k8s.io/name=ha-061400 minikube.k8s.io/primary=true
	I0408 23:49:12.553298    7680 ops.go:34] apiserver oom_adj: -16
	I0408 23:49:12.770225    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 23:49:13.272611    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 23:49:13.770168    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 23:49:14.268831    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 23:49:14.770096    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 23:49:14.957462    7680 kubeadm.go:1113] duration metric: took 2.4341643s to wait for elevateKubeSystemPrivileges
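The minikube-rbac clusterrolebinding grants cluster-admin to the kube-system:default service account, and the repeated `kubectl get sa default` Run lines above are a poll loop: minikube retries roughly every half second until the default ServiceAccount exists, which is why this step reports a 2.4s duration. A minimal Go sketch of that retry pattern (hypothetical helper, assuming a kubectl on PATH):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls until `kubectl get sa default` succeeds,
// mirroring the retry loop behind the repeated Run lines above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
			"get", "sa", "default").Run()
		if err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	fmt.Println(waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute))
}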
	I0408 23:49:14.957634    7680 kubeadm.go:394] duration metric: took 18.7405399s to StartCluster
	I0408 23:49:14.957787    7680 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:49:14.958089    7680 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0408 23:49:14.959926    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:49:14.961014    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0408 23:49:14.961014    7680 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.119.206 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 23:49:14.961014    7680 start.go:241] waiting for startup goroutines ...
	I0408 23:49:14.961014    7680 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0408 23:49:14.961689    7680 addons.go:69] Setting storage-provisioner=true in profile "ha-061400"
	I0408 23:49:14.961689    7680 addons.go:69] Setting default-storageclass=true in profile "ha-061400"
	I0408 23:49:14.961796    7680 addons.go:238] Setting addon storage-provisioner=true in "ha-061400"
	I0408 23:49:14.961796    7680 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-061400"
	I0408 23:49:14.961796    7680 host.go:66] Checking if "ha-061400" exists ...
	I0408 23:49:14.961796    7680 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:49:14.961796    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:49:14.961796    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:49:15.179113    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.112.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0408 23:49:15.545714    7680 start.go:971] {"host.minikube.internal": 192.168.112.1} host record injected into CoreDNS's ConfigMap
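The pipeline above fetches the coredns ConfigMap, uses sed to splice a hosts{} block (mapping host.minikube.internal to the host gateway 192.168.112.1, with fallthrough to the next plugin) in front of the `forward . /etc/resolv.conf` line, and replaces the ConfigMap, so pods can resolve the host machine by name. The same splice as a Go sketch on a plain Corefile string (the real flow round-trips the ConfigMap through kubectl):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS hosts{} block ahead of the
// "forward . /etc/resolv.conf" line, like the sed pipeline in the log.
func injectHostRecord(corefile, ip, name string) string {
	block := fmt.Sprintf(
		"        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, name)
	marker := "        forward . /etc/resolv.conf"
	return strings.Replace(corefile, marker, block+marker, 1)
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}"
	fmt.Println(injectHostRecord(corefile, "192.168.112.1", "host.minikube.internal"))
}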
	I0408 23:49:17.281961    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:49:17.282912    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:17.283023    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:49:17.283147    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:17.284187    7680 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0408 23:49:17.284938    7680 kapi.go:59] client config for ha-061400: &rest.Config{Host:"https://192.168.127.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-061400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-061400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2809400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0408 23:49:17.285909    7680 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 23:49:17.286944    7680 cert_rotation.go:140] Starting client certificate rotation controller
	I0408 23:49:17.287048    7680 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0408 23:49:17.287048    7680 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0408 23:49:17.287048    7680 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0408 23:49:17.287048    7680 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0408 23:49:17.288581    7680 addons.go:238] Setting addon default-storageclass=true in "ha-061400"
	I0408 23:49:17.288581    7680 host.go:66] Checking if "ha-061400" exists ...
	I0408 23:49:17.288581    7680 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 23:49:17.288789    7680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 23:49:17.288962    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:49:17.289848    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:49:19.810344    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:49:19.810344    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:19.810344    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:49:19.864377    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:49:19.864377    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:19.864377    7680 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 23:49:19.864377    7680 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 23:49:19.864964    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:49:22.108017    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:49:22.108017    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:22.108017    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:49:22.551998    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:49:22.552081    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:22.552510    7680 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0408 23:49:22.703821    7680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 23:49:24.735038    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:49:24.736053    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:24.736177    7680 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0408 23:49:24.863889    7680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 23:49:25.013362    7680 round_trippers.go:470] GET https://192.168.127.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0408 23:49:25.013362    7680 round_trippers.go:476] Request Headers:
	I0408 23:49:25.013362    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:49:25.013362    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:49:25.030435    7680 round_trippers.go:581] Response Status: 200 OK in 17 milliseconds
	I0408 23:49:25.030978    7680 round_trippers.go:470] PUT https://192.168.127.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0408 23:49:25.030978    7680 round_trippers.go:476] Request Headers:
	I0408 23:49:25.030978    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:49:25.030978    7680 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0408 23:49:25.030978    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:49:25.056875    7680 round_trippers.go:581] Response Status: 200 OK in 25 milliseconds
	I0408 23:49:25.064913    7680 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0408 23:49:25.067861    7680 addons.go:514] duration metric: took 10.106714s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0408 23:49:25.067861    7680 start.go:246] waiting for cluster config update ...
	I0408 23:49:25.067861    7680 start.go:255] writing updated cluster config ...
	I0408 23:49:25.071572    7680 out.go:201] 
	I0408 23:49:25.086301    7680 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:49:25.086504    7680 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\config.json ...
	I0408 23:49:25.094561    7680 out.go:177] * Starting "ha-061400-m02" control-plane node in "ha-061400" cluster
	I0408 23:49:25.100604    7680 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0408 23:49:25.100604    7680 cache.go:56] Caching tarball of preloaded images
	I0408 23:49:25.100604    7680 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0408 23:49:25.100604    7680 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0408 23:49:25.100604    7680 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\config.json ...
	I0408 23:49:25.105820    7680 start.go:360] acquireMachinesLock for ha-061400-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 23:49:25.105820    7680 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-061400-m02"
	I0408 23:49:25.106660    7680 start.go:93] Provisioning new machine with config: &{Name:ha-061400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP:192.168.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.119.206 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 23:49:25.106660    7680 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0408 23:49:25.110298    7680 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 23:49:25.111132    7680 start.go:159] libmachine.API.Create for "ha-061400" (driver="hyperv")
	I0408 23:49:25.111194    7680 client.go:168] LocalClient.Create starting
	I0408 23:49:25.111418    7680 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0408 23:49:25.111418    7680 main.go:141] libmachine: Decoding PEM data...
	I0408 23:49:25.111872    7680 main.go:141] libmachine: Parsing certificate...
	I0408 23:49:25.112043    7680 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0408 23:49:25.112233    7680 main.go:141] libmachine: Decoding PEM data...
	I0408 23:49:25.112233    7680 main.go:141] libmachine: Parsing certificate...
	I0408 23:49:25.112233    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0408 23:49:26.936855    7680 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0408 23:49:26.936855    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:26.937643    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0408 23:49:28.638933    7680 main.go:141] libmachine: [stdout =====>] : False
	
	I0408 23:49:28.639295    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:28.639295    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0408 23:49:30.094362    7680 main.go:141] libmachine: [stdout =====>] : True
	
	I0408 23:49:30.094985    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:30.095069    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0408 23:49:33.636198    7680 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0408 23:49:33.637009    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:33.639567    7680 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0408 23:49:34.116352    7680 main.go:141] libmachine: Creating SSH key...
	I0408 23:49:34.453600    7680 main.go:141] libmachine: Creating VM...
	I0408 23:49:34.453600    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0408 23:49:37.254787    7680 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0408 23:49:37.255179    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:37.255179    7680 main.go:141] libmachine: Using switch "Default Switch"
	I0408 23:49:37.255287    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0408 23:49:39.035903    7680 main.go:141] libmachine: [stdout =====>] : True
	
	I0408 23:49:39.036099    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:39.036099    7680 main.go:141] libmachine: Creating VHD
	I0408 23:49:39.036099    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0408 23:49:42.893446    7680 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 8657F626-CBAE-4F1A-B23A-DAAD31A1A26E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0408 23:49:42.893983    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:42.894261    7680 main.go:141] libmachine: Writing magic tar header
	I0408 23:49:42.894713    7680 main.go:141] libmachine: Writing SSH key tar header
	I0408 23:49:42.906778    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0408 23:49:46.032619    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:49:46.032619    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:46.032619    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\disk.vhd' -SizeBytes 20000MB
	I0408 23:49:48.562045    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:49:48.562181    7680 main.go:141] libmachine: [stderr =====>] : 
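
Note: this three-step VHD sequence is how the driver seeds the machine disk: it creates a tiny 10 MB fixed VHD (a flat data area), writes a "magic" tar header plus the generated SSH key straight into that data area (the boot2docker auto-format convention, so the guest can pick them up on first boot), then converts the file to a dynamic VHD and grows it to the requested 20000 MB. The cmdlet sequence, consolidated as a sketch ($machineDir is shorthand for the machine folder used above):

    # $machineDir = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02'  (shorthand)
    Hyper-V\New-VHD -Path "$machineDir\fixed.vhd" -SizeBytes 10MB -Fixed
    # ...the driver writes the magic tar header and SSH key into fixed.vhd at this point...
    Hyper-V\Convert-VHD -Path "$machineDir\fixed.vhd" -DestinationPath "$machineDir\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD -Path "$machineDir\disk.vhd" -SizeBytes 20000MB
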
	I0408 23:49:48.562181    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-061400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0408 23:49:52.144403    7680 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-061400-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0408 23:49:52.145409    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:52.145453    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-061400-m02 -DynamicMemoryEnabled $false
	I0408 23:49:54.396449    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:49:54.396449    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:54.396449    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-061400-m02 -Count 2
	I0408 23:49:56.534316    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:49:56.534462    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:56.534462    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-061400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\boot2docker.iso'
	I0408 23:49:59.066462    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:49:59.066847    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:59.066847    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-061400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\disk.vhd'
	I0408 23:50:01.663129    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:50:01.663322    7680 main.go:141] libmachine: [stderr =====>] : 
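
Note: with the disk ready, the VM itself is assembled in five cmdlets: create it attached to the chosen switch with 2200 MB of startup memory, pin the memory static, give it two vCPUs, mount boot2docker.iso as the DVD boot medium, and attach disk.vhd. Consolidated ($machineDir as in the previous sketch):

    Hyper-V\New-VM ha-061400-m02 -Path $machineDir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName ha-061400-m02 -DynamicMemoryEnabled $false   # fixed 2200 MB, no ballooning
    Hyper-V\Set-VMProcessor ha-061400-m02 -Count 2
    Hyper-V\Set-VMDvdDrive -VMName ha-061400-m02 -Path "$machineDir\boot2docker.iso"
    Hyper-V\Add-VMHardDiskDrive -VMName ha-061400-m02 -Path "$machineDir\disk.vhd"
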
	I0408 23:50:01.663322    7680 main.go:141] libmachine: Starting VM...
	I0408 23:50:01.663322    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-061400-m02
	I0408 23:50:04.686552    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:50:04.687694    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:04.687694    7680 main.go:141] libmachine: Waiting for host to start...
	I0408 23:50:04.687694    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:06.932925    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:06.932925    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:06.932925    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:50:09.496559    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:50:09.496559    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:10.497407    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:12.728042    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:12.728042    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:12.728513    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:50:15.219276    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:50:15.219276    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:16.220503    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:18.411416    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:18.411648    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:18.411648    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:50:20.967316    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:50:20.967316    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:21.967639    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:24.253469    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:24.253469    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:24.253604    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:50:26.826934    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:50:26.827350    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:27.828496    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:30.069749    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:30.070701    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:30.070701    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:50:32.670259    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:50:32.670259    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:32.670946    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:34.843064    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:34.843064    7680 main.go:141] libmachine: [stderr =====>] : 
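
Note: "Waiting for host to start..." is a simple poll: about once a second the driver asks Hyper-V for the VM state and for the first IPv4 address on the first network adapter, and loops while the address is empty; here the DHCP lease from the Default Switch arrives after roughly 28 seconds as 192.168.118.215. A reconstruction of that loop as a standalone snippet (the driver does this from Go, one powershell.exe invocation per query):

    # Poll until the first adapter reports an address (sketch, not driver source).
    do {
        Start-Sleep -Seconds 1
        $state = (Hyper-V\Get-VM ha-061400-m02).State
        $ip    = ((Hyper-V\Get-VM ha-061400-m02).NetworkAdapters[0]).IPAddresses[0]
    } until ($state -ne 'Running' -or $ip)
    "$state $ip"   # e.g. Running 192.168.118.215
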
	I0408 23:50:34.843804    7680 machine.go:93] provisionDockerMachine start ...
	I0408 23:50:34.843804    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:37.001022    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:37.001828    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:37.001828    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:50:39.560208    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:50:39.560993    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:39.566823    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:50:39.581592    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.118.215 22 <nil> <nil>}
	I0408 23:50:39.581738    7680 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 23:50:39.717615    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 23:50:39.717615    7680 buildroot.go:166] provisioning hostname "ha-061400-m02"
	I0408 23:50:39.717615    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:41.897975    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:41.898315    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:41.898315    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:50:44.431012    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:50:44.431012    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:44.438131    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:50:44.438245    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.118.215 22 <nil> <nil>}
	I0408 23:50:44.438841    7680 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-061400-m02 && echo "ha-061400-m02" | sudo tee /etc/hostname
	I0408 23:50:44.606274    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-061400-m02
	
	I0408 23:50:44.606274    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:46.692428    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:46.692532    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:46.692532    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:50:49.224140    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:50:49.224140    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:49.231721    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:50:49.232272    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.118.215 22 <nil> <nil>}
	I0408 23:50:49.232354    7680 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-061400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-061400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-061400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 23:50:49.398744    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 23:50:49.398949    7680 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0408 23:50:49.399084    7680 buildroot.go:174] setting up certificates
	I0408 23:50:49.399159    7680 provision.go:84] configureAuth start
	I0408 23:50:49.399276    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:51.551311    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:51.552323    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:51.552540    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:50:54.199621    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:50:54.199621    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:54.199621    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:56.376626    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:56.376626    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:56.377257    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:50:58.926832    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:50:58.926832    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:58.926832    7680 provision.go:143] copyHostCerts
	I0408 23:50:58.927024    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0408 23:50:58.927330    7680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0408 23:50:58.927419    7680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0408 23:50:58.927943    7680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0408 23:50:58.929186    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0408 23:50:58.929464    7680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0408 23:50:58.929464    7680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0408 23:50:58.929851    7680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0408 23:50:58.930960    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0408 23:50:58.931318    7680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0408 23:50:58.931318    7680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0408 23:50:58.931662    7680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0408 23:50:58.932280    7680 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-061400-m02 san=[127.0.0.1 192.168.118.215 ha-061400-m02 localhost minikube]
	I0408 23:50:59.298698    7680 provision.go:177] copyRemoteCerts
	I0408 23:50:59.311822    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 23:50:59.311822    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:01.413791    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:01.413791    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:01.413885    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:03.905233    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:03.905233    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:03.905233    7680 sshutil.go:53] new ssh client: &{IP:192.168.118.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\id_rsa Username:docker}
	I0408 23:51:04.008672    7680 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6967878s)
	I0408 23:51:04.008672    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0408 23:51:04.009297    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0408 23:51:04.054930    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0408 23:51:04.054930    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0408 23:51:04.106383    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0408 23:51:04.107015    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 23:51:04.149801    7680 provision.go:87] duration metric: took 14.7504488s to configureAuth
	I0408 23:51:04.149801    7680 buildroot.go:189] setting minikube options for container-runtime
	I0408 23:51:04.149801    7680 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:51:04.149801    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:06.260659    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:06.260659    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:06.260659    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:08.809271    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:08.809271    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:08.815428    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:51:08.816103    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.118.215 22 <nil> <nil>}
	I0408 23:51:08.816103    7680 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0408 23:51:08.961881    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0408 23:51:08.961881    7680 buildroot.go:70] root file system type: tmpfs
	I0408 23:51:08.961881    7680 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0408 23:51:08.961881    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:11.080770    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:11.080830    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:11.080969    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:13.647629    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:13.647629    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:13.655078    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:51:13.655838    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.118.215 22 <nil> <nil>}
	I0408 23:51:13.655838    7680 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.119.206"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0408 23:51:13.834752    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.119.206
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0408 23:51:13.834752    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:15.953152    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:15.953787    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:15.953905    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:18.454760    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:18.454760    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:18.461020    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:51:18.461172    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.118.215 22 <nil> <nil>}
	I0408 23:51:18.461172    7680 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0408 23:51:20.704335    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0408 23:51:20.704445    7680 machine.go:96] duration metric: took 45.860001s to provisionDockerMachine
	I0408 23:51:20.704445    7680 client.go:171] duration metric: took 1m55.5917338s to LocalClient.Create
	I0408 23:51:20.704507    7680 start.go:167] duration metric: took 1m55.5926909s to libmachine.API.Create "ha-061400"
	I0408 23:51:20.704586    7680 start.go:293] postStartSetup for "ha-061400-m02" (driver="hyperv")
	I0408 23:51:20.704608    7680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 23:51:20.717095    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 23:51:20.717095    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:22.822959    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:22.823522    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:22.823522    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:25.375870    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:25.376714    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:25.376714    7680 sshutil.go:53] new ssh client: &{IP:192.168.118.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\id_rsa Username:docker}
	I0408 23:51:25.486134    7680 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7688944s)
	I0408 23:51:25.497212    7680 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 23:51:25.504554    7680 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 23:51:25.504554    7680 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0408 23:51:25.505065    7680 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0408 23:51:25.505459    7680 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
	I0408 23:51:25.505459    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /etc/ssl/certs/98642.pem
	I0408 23:51:25.517073    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 23:51:25.535672    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
	I0408 23:51:25.581421    7680 start.go:296] duration metric: took 4.8767484s for postStartSetup
	I0408 23:51:25.584475    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:27.654042    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:27.654042    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:27.654731    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:30.221944    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:30.222279    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:30.222375    7680 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\config.json ...
	I0408 23:51:30.225386    7680 start.go:128] duration metric: took 2m5.1170821s to createHost
	I0408 23:51:30.225386    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:32.306219    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:32.306219    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:32.306219    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:34.793264    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:34.794046    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:34.799581    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:51:34.800164    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.118.215 22 <nil> <nil>}
	I0408 23:51:34.800214    7680 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 23:51:34.935220    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744156294.961864315
	
	I0408 23:51:34.935220    7680 fix.go:216] guest clock: 1744156294.961864315
	I0408 23:51:34.935220    7680 fix.go:229] Guest: 2025-04-08 23:51:34.961864315 +0000 UTC Remote: 2025-04-08 23:51:30.2253864 +0000 UTC m=+324.590838901 (delta=4.736477915s)
	I0408 23:51:34.935220    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:36.991554    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:36.991554    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:36.991641    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:39.518867    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:39.518867    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:39.524967    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:51:39.525498    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.118.215 22 <nil> <nil>}
	I0408 23:51:39.525498    7680 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744156294
	I0408 23:51:39.679155    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr  8 23:51:34 UTC 2025
	
	I0408 23:51:39.679155    7680 fix.go:236] clock set: Tue Apr  8 23:51:34 UTC 2025
	 (err=<nil>)
	I0408 23:51:39.679155    7680 start.go:83] releasing machines lock for "ha-061400-m02", held for 2m14.570947s
	I0408 23:51:39.679348    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:41.790733    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:41.790733    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:41.790733    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:44.285376    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:44.286080    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:44.289066    7680 out.go:177] * Found network options:
	I0408 23:51:44.292754    7680 out.go:177]   - NO_PROXY=192.168.119.206
	W0408 23:51:44.295422    7680 proxy.go:119] fail to check proxy env: Error ip not in block
	I0408 23:51:44.298087    7680 out.go:177]   - NO_PROXY=192.168.119.206
	W0408 23:51:44.300514    7680 proxy.go:119] fail to check proxy env: Error ip not in block
	W0408 23:51:44.302073    7680 proxy.go:119] fail to check proxy env: Error ip not in block
	I0408 23:51:44.303979    7680 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0408 23:51:44.303979    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:44.313538    7680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0408 23:51:44.313538    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:46.531342    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:46.531342    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:46.531342    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:46.589783    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:46.590295    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:46.590492    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:49.153660    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:49.153660    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:49.154261    7680 sshutil.go:53] new ssh client: &{IP:192.168.118.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\id_rsa Username:docker}
	I0408 23:51:49.191618    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:49.191618    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:49.191901    7680 sshutil.go:53] new ssh client: &{IP:192.168.118.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\id_rsa Username:docker}
	I0408 23:51:49.251700    7680 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9476551s)
	W0408 23:51:49.251700    7680 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0408 23:51:49.286590    7680 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9729866s)
	W0408 23:51:49.286590    7680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 23:51:49.300131    7680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 23:51:49.331137    7680 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 23:51:49.331201    7680 start.go:495] detecting cgroup driver to use...
	I0408 23:51:49.331454    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:51:49.374914    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0408 23:51:49.405720    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0408 23:51:49.416815    7680 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0408 23:51:49.416887    7680 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0408 23:51:49.428021    7680 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 23:51:49.438732    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 23:51:49.468979    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:51:49.502834    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 23:51:49.530402    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:51:49.561734    7680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 23:51:49.592054    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 23:51:49.620273    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 23:51:49.649398    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0408 23:51:49.679367    7680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 23:51:49.696698    7680 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 23:51:49.707474    7680 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 23:51:49.739920    7680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 23:51:49.768525    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:51:49.958388    7680 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 23:51:49.990761    7680 start.go:495] detecting cgroup driver to use...
	I0408 23:51:50.002571    7680 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0408 23:51:50.037454    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:51:50.068632    7680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 23:51:50.110899    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:51:50.144867    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 23:51:50.176622    7680 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0408 23:51:50.236348    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 23:51:50.260696    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:51:50.306903    7680 ssh_runner.go:195] Run: which cri-dockerd
	I0408 23:51:50.323778    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0408 23:51:50.339812    7680 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0408 23:51:50.390340    7680 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0408 23:51:50.589983    7680 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0408 23:51:50.771160    7680 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0408 23:51:50.771268    7680 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0408 23:51:50.813676    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:51:51.014877    7680 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 23:51:53.595452    7680 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5805415s)
	I0408 23:51:53.605124    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0408 23:51:53.639109    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 23:51:53.676568    7680 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0408 23:51:53.851837    7680 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0408 23:51:54.032978    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:51:54.218859    7680 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0408 23:51:54.258094    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 23:51:54.290848    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:51:54.473830    7680 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0408 23:51:54.582402    7680 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0408 23:51:54.595350    7680 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0408 23:51:54.604136    7680 start.go:563] Will wait 60s for crictl version
	I0408 23:51:54.613815    7680 ssh_runner.go:195] Run: which crictl
	I0408 23:51:54.630092    7680 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 23:51:54.685019    7680 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0408 23:51:54.695653    7680 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 23:51:54.736307    7680 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 23:51:54.775694    7680 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0408 23:51:54.779495    7680 out.go:177]   - env NO_PROXY=192.168.119.206
	I0408 23:51:54.782726    7680 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0408 23:51:54.786962    7680 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0408 23:51:54.786962    7680 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0408 23:51:54.786962    7680 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0408 23:51:54.786962    7680 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f4:da:75 Flags:up|broadcast|multicast|running}
	I0408 23:51:54.789904    7680 ip.go:214] interface addr: fe80::e8ab:9cc6:22b1:a5fc/64
	I0408 23:51:54.789904    7680 ip.go:214] interface addr: 192.168.112.1/20
	I0408 23:51:54.799936    7680 ssh_runner.go:195] Run: grep 192.168.112.1	host.minikube.internal$ /etc/hosts
	I0408 23:51:54.806531    7680 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
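
Note: to make the host reachable from the guest, ip.go scans the host's interfaces for one named "vEthernet (Default Switch)", takes its IPv4 address (192.168.112.1/20), and the bash one-liner above pins that address in the guest's /etc/hosts as host.minikube.internal. An equivalent manual lookup on the host (hypothetical check; minikube itself resolves this through Go's net package, not this cmdlet):

    # Hypothetical manual equivalent of the ip.go interface scan above.
    Get-NetIPAddress -InterfaceAlias 'vEthernet (Default Switch)' -AddressFamily IPv4 |
        Select-Object InterfaceAlias, IPAddress, PrefixLength
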
	I0408 23:51:54.826947    7680 mustload.go:65] Loading cluster: ha-061400
	I0408 23:51:54.827195    7680 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:51:54.828344    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:51:56.934099    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:56.934163    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:56.934163    7680 host.go:66] Checking if "ha-061400" exists ...
	I0408 23:51:56.934964    7680 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400 for IP: 192.168.118.215
	I0408 23:51:56.934964    7680 certs.go:194] generating shared ca certs ...
	I0408 23:51:56.934964    7680 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:51:56.935885    7680 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0408 23:51:56.936465    7680 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0408 23:51:56.936684    7680 certs.go:256] generating profile certs ...
	I0408 23:51:56.936979    7680 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\client.key
	I0408 23:51:56.937585    7680 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.b63c1d01
	I0408 23:51:56.937644    7680 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.b63c1d01 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.119.206 192.168.118.215 192.168.127.254]
	I0408 23:51:57.251981    7680 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.b63c1d01 ...
	I0408 23:51:57.251981    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.b63c1d01: {Name:mk302d2222fa2b96163094148d492cc5223092ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:51:57.251981    7680 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.b63c1d01 ...
	I0408 23:51:57.251981    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.b63c1d01: {Name:mk852e0eda79569f305cf26eff880333ce4f458a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:51:57.251981    7680 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.b63c1d01 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt
	I0408 23:51:57.277431    7680 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.b63c1d01 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key
	I0408 23:51:57.279302    7680 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key
	I0408 23:51:57.279302    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 23:51:57.279493    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0408 23:51:57.279708    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 23:51:57.279832    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 23:51:57.280032    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 23:51:57.280151    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 23:51:57.280151    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 23:51:57.280151    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 23:51:57.280995    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem (1338 bytes)
	W0408 23:51:57.281583    7680 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864_empty.pem, impossibly tiny 0 bytes
	I0408 23:51:57.281734    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0408 23:51:57.282139    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0408 23:51:57.282439    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0408 23:51:57.282735    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0408 23:51:57.283168    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem (1708 bytes)
	I0408 23:51:57.283168    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:51:57.283865    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem -> /usr/share/ca-certificates/9864.pem
	I0408 23:51:57.283934    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /usr/share/ca-certificates/98642.pem
	I0408 23:51:57.284306    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:51:59.454137    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:59.454137    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:59.454137    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:52:01.939992    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:52:01.939992    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:52:01.941850    7680 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0408 23:52:02.052842    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0408 23:52:02.061481    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0408 23:52:02.090806    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0408 23:52:02.099584    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0408 23:52:02.133515    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0408 23:52:02.140950    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0408 23:52:02.175432    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0408 23:52:02.181931    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0408 23:52:02.215461    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0408 23:52:02.223444    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0408 23:52:02.263407    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0408 23:52:02.270435    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0408 23:52:02.300178    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 23:52:02.351996    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 23:52:02.404581    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 23:52:02.450228    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 23:52:02.496740    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0408 23:52:02.543093    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 23:52:02.588336    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 23:52:02.633048    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 23:52:02.678867    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 23:52:02.733547    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem --> /usr/share/ca-certificates/9864.pem (1338 bytes)
	I0408 23:52:02.787720    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /usr/share/ca-certificates/98642.pem (1708 bytes)
	I0408 23:52:02.830338    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0408 23:52:02.861814    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0408 23:52:02.891872    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0408 23:52:02.921895    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0408 23:52:02.952372    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0408 23:52:02.986801    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0408 23:52:03.019812    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0408 23:52:03.068330    7680 ssh_runner.go:195] Run: openssl version
	I0408 23:52:03.088144    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98642.pem && ln -fs /usr/share/ca-certificates/98642.pem /etc/ssl/certs/98642.pem"
	I0408 23:52:03.120271    7680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98642.pem
	I0408 23:52:03.127455    7680 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 23:04 /usr/share/ca-certificates/98642.pem
	I0408 23:52:03.139120    7680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98642.pem
	I0408 23:52:03.161640    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98642.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 23:52:03.193296    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 23:52:03.224195    7680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:52:03.232335    7680 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:52:03.242574    7680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:52:03.262747    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 23:52:03.294542    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9864.pem && ln -fs /usr/share/ca-certificates/9864.pem /etc/ssl/certs/9864.pem"
	I0408 23:52:03.326329    7680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9864.pem
	I0408 23:52:03.333746    7680 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 23:04 /usr/share/ca-certificates/9864.pem
	I0408 23:52:03.345174    7680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9864.pem
	I0408 23:52:03.364942    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9864.pem /etc/ssl/certs/51391683.0"
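
Each certificate above is installed the way OpenSSL trust directories expect: compute the subject hash with `openssl x509 -hash -noout`, then symlink <hash>.0 in /etc/ssl/certs to the PEM file. A minimal sketch of that step, assuming openssl is on PATH (installCA is an illustrative helper name, not minikube's):

    // Sketch only: compute the OpenSSL subject hash and create the
    // <hash>.0 symlink, as the openssl/ln commands above do.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func installCA(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
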
	I0408 23:52:03.399479    7680 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 23:52:03.407531    7680 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 23:52:03.407531    7680 kubeadm.go:934] updating node {m02 192.168.118.215 8443 v1.32.2 docker true true} ...
	I0408 23:52:03.408059    7680 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-061400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.118.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP:192.168.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
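
The drop-in above overrides ExecStart so the kubelet starts with node-specific --hostname-override and --node-ip flags. A sketch of rendering such a drop-in with text/template, using values from the log (the template is trimmed to those two flags and is not minikube's actual template):

    // Sketch only: renders a kubelet systemd drop-in like the one logged above.
    package main

    import (
        "os"
        "text/template"
    )

    const unit = "[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}\n"

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        _ = t.Execute(os.Stdout, map[string]string{
            "Version": "v1.32.2",
            "Node":    "ha-061400-m02",
            "IP":      "192.168.118.215",
        })
    }
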
	I0408 23:52:03.408059    7680 kube-vip.go:115] generating kube-vip config ...
	I0408 23:52:03.420771    7680 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0408 23:52:03.454516    7680 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0408 23:52:03.454516    7680 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.127.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
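
The manifest above is a static pod: the kubelet picks it up from /etc/kubernetes/manifests and runs kube-vip, which holds the HA virtual IP 192.168.127.254 on eth0 via the plndr-cp-lock lease and load-balances port 8443 across control planes. One way to sanity-check such a manifest is to round-trip it through the corev1.Pod type, sketched below (assumes the k8s.io/api and sigs.k8s.io/yaml modules are available):

    // Sketch only: parses a static-pod manifest such as the kube-vip
    // config above into the corev1.Pod type the kubelet uses.
    package main

    import (
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            panic(err)
        }
        var pod corev1.Pod
        if err := yaml.Unmarshal(data, &pod); err != nil {
            panic(err)
        }
        // For the manifest above: kube-vip ghcr.io/kube-vip/kube-vip:v0.8.10
        fmt.Println(pod.Name, pod.Spec.Containers[0].Image)
    }
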
	I0408 23:52:03.468702    7680 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0408 23:52:03.487066    7680 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0408 23:52:03.501840    7680 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0408 23:52:03.531453    7680 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm
	I0408 23:52:03.531650    7680 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl
	I0408 23:52:03.531650    7680 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet
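
The ?checksum=file:... suffix on each URL tells the downloader to fetch the published .sha256 sidecar and verify the binary against it. A self-contained sketch of that download-and-verify flow (URL from the log; the fetch helper is illustrative, not minikube's downloader):

    // Sketch only: download-and-verify in the style of the three fetches
    // above, checking the binary against its published .sha256 sidecar.
    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // fetch streams url to dst while hashing it, returning the hex digest.
    func fetch(url, dst string) (string, error) {
        resp, err := http.Get(url)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        f, err := os.Create(dst)
        if err != nil {
            return "", err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            return "", err
        }
        return hex.EncodeToString(h.Sum(nil)), nil
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm"
        got, err := fetch(base, "kubeadm")
        if err != nil {
            panic(err)
        }
        resp, err := http.Get(base + ".sha256")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        want, _ := io.ReadAll(resp.Body)
        if strings.TrimSpace(string(want)) != got {
            fmt.Println("checksum mismatch")
            os.Exit(1)
        }
        fmt.Println("checksum OK:", got)
    }
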
	I0408 23:52:04.987172    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0408 23:52:04.996795    7680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0408 23:52:05.003835    7680 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0408 23:52:05.004793    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0408 23:52:05.231918    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm -> /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0408 23:52:05.242921    7680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0408 23:52:05.251909    7680 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0408 23:52:05.251909    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
	I0408 23:52:05.263926    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 23:52:05.319632    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0408 23:52:05.331906    7680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0408 23:52:05.348891    7680 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0408 23:52:05.348958    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
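
Each binary transfer is guarded by the existence check above: stat the remote path, and only scp when stat exits non-zero. Sketched below with plain ssh/scp binaries instead of minikube's ssh_runner (host, user, and key path taken from the run; the helper itself is illustrative):

    // Sketch only: the check-then-copy pattern from the log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func ensureRemote(host, key, local, remote string) error {
        // Mirrors the "existence check": stat the remote path first.
        stat := exec.Command("ssh", "-i", key, "docker@"+host, "stat", "-c", "%s %y", remote)
        if stat.Run() == nil {
            return nil // already present; a real check would compare size/mtime
        }
        fmt.Println("copying", local, "-->", remote)
        return exec.Command("scp", "-i", key, local, "docker@"+host+":"+remote).Run()
    }

    func main() {
        err := ensureRemote("192.168.118.215",
            `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa`,
            "kubelet", "/var/lib/minikube/binaries/v1.32.2/kubelet")
        if err != nil {
            panic(err)
        }
    }
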
	I0408 23:52:06.270412    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0408 23:52:06.289220    7680 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0408 23:52:06.333913    7680 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 23:52:06.365720    7680 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1443 bytes)
	I0408 23:52:06.411799    7680 ssh_runner.go:195] Run: grep 192.168.127.254	control-plane.minikube.internal$ /etc/hosts
	I0408 23:52:06.417614    7680 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.127.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
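
The one-liner above makes the hosts entry idempotent: strip any line already ending in the control-plane name, then append the fresh mapping. The same logic in Go, as a sketch (values from the log; would need to run as root to rewrite /etc/hosts):

    // Sketch only: drop any existing control-plane.minikube.internal line,
    // then append the new ip<TAB>host mapping, as the bash one-liner does.
    package main

    import (
        "os"
        "strings"
    )

    func updateHosts(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var keep []string
        for _, line := range strings.Split(string(data), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                keep = append(keep, line)
            }
        }
        keep = append(keep, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(keep, "\n")+"\n"), 0644)
    }

    func main() {
        if err := updateHosts("/etc/hosts", "192.168.127.254", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }
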
	I0408 23:52:06.453793    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:52:06.660845    7680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 23:52:06.693729    7680 host.go:66] Checking if "ha-061400" exists ...
	I0408 23:52:06.694629    7680 start.go:317] joinCluster: &{Name:ha-061400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP:192.168.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.119.206 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.118.215 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:52:06.694629    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0408 23:52:06.694629    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:52:08.810408    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:52:08.810408    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:52:08.810408    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:52:11.380606    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:52:11.381491    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:52:11.381491    7680 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0408 23:52:11.983301    7680 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0": (5.2885242s)
	I0408 23:52:11.983450    7680 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.118.215 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 23:52:11.983565    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 67n8ol.hj0bx7fxbu2j590a --discovery-token-ca-cert-hash sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-061400-m02 --control-plane --apiserver-advertise-address=192.168.118.215 --apiserver-bind-port=8443"
	I0408 23:52:52.890535    7680 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 67n8ol.hj0bx7fxbu2j590a --discovery-token-ca-cert-hash sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-061400-m02 --control-plane --apiserver-advertise-address=192.168.118.215 --apiserver-bind-port=8443": (40.9064324s)
	I0408 23:52:52.890535    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0408 23:52:53.604714    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-061400-m02 minikube.k8s.io/updated_at=2025_04_08T23_52_53_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83 minikube.k8s.io/name=ha-061400 minikube.k8s.io/primary=false
	I0408 23:52:53.780370    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-061400-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0408 23:52:53.974016    7680 start.go:319] duration metric: took 47.2787653s to joinCluster
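
The join sequence above is: ask the existing control plane for a join command with a non-expiring token, then run it on m02 with the extra control-plane flags. A condensed sketch of that flow (flag values from the log; in the real run the two commands execute on different machines over SSH, here both run locally for illustration):

    // Sketch only: the two-step kubeadm join above, condensed.
    package main

    import (
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubeadm", "token", "create",
            "--print-join-command", "--ttl=0").Output()
        if err != nil {
            panic(err)
        }
        args := strings.Fields(strings.TrimSpace(string(out)))
        // Append the control-plane flags the log shows for the new node.
        args = append(args,
            "--control-plane",
            "--apiserver-advertise-address=192.168.118.215",
            "--apiserver-bind-port=8443")
        if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
            panic(err)
        }
    }
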
	I0408 23:52:53.975070    7680 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.118.215 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 23:52:53.975859    7680 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:52:53.978145    7680 out.go:177] * Verifying Kubernetes components...
	I0408 23:52:53.995071    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:52:54.349110    7680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 23:52:54.386374    7680 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0408 23:52:54.387021    7680 kapi.go:59] client config for ha-061400: &rest.Config{Host:"https://192.168.127.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-061400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-061400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2809400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0408 23:52:54.387173    7680 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.127.254:8443 with https://192.168.119.206:8443
	I0408 23:52:54.388356    7680 node_ready.go:35] waiting up to 6m0s for node "ha-061400-m02" to be "Ready" ...
	I0408 23:52:54.388684    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:54.388741    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:54.388773    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:54.388773    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:54.410067    7680 round_trippers.go:581] Response Status: 200 OK in 21 milliseconds
	I0408 23:52:54.888845    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:54.889437    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:54.889437    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:54.889437    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:54.894111    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:52:55.390279    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:55.390279    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:55.390279    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:55.390279    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:55.396588    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:52:55.888932    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:55.888932    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:55.888932    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:55.888932    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:55.895046    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:52:56.389053    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:56.389053    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:56.389053    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:56.389053    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:56.394117    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:52:56.395125    7680 node_ready.go:53] node "ha-061400-m02" has status "Ready":"False"
	I0408 23:52:56.889777    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:56.889777    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:56.889777    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:56.889777    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:56.895742    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:52:57.389060    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:57.389060    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:57.389060    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:57.389060    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:57.393910    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:52:57.889146    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:57.889146    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:57.889146    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:57.889335    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:58.032379    7680 round_trippers.go:581] Response Status: 200 OK in 143 milliseconds
	I0408 23:52:58.389498    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:58.389498    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:58.389562    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:58.389685    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:58.393061    7680 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0408 23:52:58.889096    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:58.889096    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:58.889096    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:58.889096    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:58.895277    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:52:58.896611    7680 node_ready.go:53] node "ha-061400-m02" has status "Ready":"False"
	I0408 23:52:59.388912    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:59.388912    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:59.388912    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:59.388912    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:59.417410    7680 round_trippers.go:581] Response Status: 200 OK in 28 milliseconds
	I0408 23:52:59.888966    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:59.888966    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:59.888966    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:59.888966    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:59.895278    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:00.389460    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:00.389460    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:00.389460    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:00.389460    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:00.394065    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:00.888774    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:00.888774    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:00.888774    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:00.888774    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:00.895468    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:01.389055    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:01.389055    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:01.389055    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:01.389055    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:01.393007    7680 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0408 23:53:01.393997    7680 node_ready.go:53] node "ha-061400-m02" has status "Ready":"False"
	I0408 23:53:01.889420    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:01.889420    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:01.889420    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:01.889420    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:01.896280    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:02.389484    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:02.389484    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:02.389484    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:02.389484    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:02.395478    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:02.889677    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:02.889739    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:02.889739    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:02.889739    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:02.895057    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:03.389064    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:03.389064    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:03.389064    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:03.389064    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:03.393802    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:03.890026    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:03.890026    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:03.890026    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:03.890026    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:03.896182    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:03.896751    7680 node_ready.go:53] node "ha-061400-m02" has status "Ready":"False"
	I0408 23:53:04.389771    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:04.389811    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:04.389811    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:04.389865    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:04.401590    7680 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0408 23:53:04.890418    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:04.890418    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:04.890418    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:04.890418    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:04.901358    7680 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0408 23:53:05.389510    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:05.389510    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:05.389510    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:05.389510    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:05.394479    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:05.889453    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:05.889453    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:05.889453    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:05.889453    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:05.895858    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:06.389631    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:06.389631    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:06.389631    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:06.389631    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:06.400111    7680 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0408 23:53:06.400489    7680 node_ready.go:53] node "ha-061400-m02" has status "Ready":"False"
	I0408 23:53:06.888748    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:06.888748    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:06.888748    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:06.888748    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:06.894994    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:07.389708    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:07.389780    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:07.389780    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:07.389861    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:07.394665    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:07.890273    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:07.890401    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:07.890401    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:07.890401    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:07.896090    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:08.389580    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:08.389580    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:08.389580    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:08.389580    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:08.395224    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:08.888944    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:08.888944    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:08.888944    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:08.888944    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:08.894268    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:08.896008    7680 node_ready.go:53] node "ha-061400-m02" has status "Ready":"False"
	I0408 23:53:09.388721    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:09.388721    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:09.388721    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:09.388721    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:09.394323    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:09.889461    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:09.889461    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:09.889461    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:09.889461    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:09.895937    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:10.389464    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:10.389510    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:10.389510    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:10.389510    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:10.393909    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:10.888939    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:10.888939    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:10.888939    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:10.888939    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:10.895108    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:11.388991    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:11.388991    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:11.388991    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:11.388991    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:11.393483    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:11.394630    7680 node_ready.go:53] node "ha-061400-m02" has status "Ready":"False"
	I0408 23:53:11.889362    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:11.889362    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:11.889362    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:11.889362    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:11.895192    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:12.389187    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:12.389187    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:12.389187    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:12.389187    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:12.400576    7680 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0408 23:53:12.888833    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:12.888833    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:12.888833    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:12.888833    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:12.894857    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:13.389165    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:13.389165    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:13.389165    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:13.389165    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:13.397967    7680 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0408 23:53:13.398762    7680 node_ready.go:53] node "ha-061400-m02" has status "Ready":"False"
	I0408 23:53:13.888933    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:13.888933    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:13.888933    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:13.888933    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:13.895271    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:14.389924    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:14.390010    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.390010    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.390069    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.392996    7680 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0408 23:53:14.889808    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:14.889808    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.889974    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.889974    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.895868    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:14.896228    7680 node_ready.go:49] node "ha-061400-m02" has status "Ready":"True"
	I0408 23:53:14.896316    7680 node_ready.go:38] duration metric: took 20.50763s for node "ha-061400-m02" to be "Ready" ...
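
The ~500ms GET loop above is a readiness poll: fetch the Node object and check its NodeReady condition until it reports True or the 6m0s budget runs out. An equivalent sketch with client-go (kubeconfig path, node name, timeout, and cadence match the run; the program itself is illustrative):

    // Sketch only: the Ready poll above, written with client-go.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-061400-m02", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for Ready")
    }
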
	I0408 23:53:14.896440    7680 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 23:53:14.896633    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods
	I0408 23:53:14.896633    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.896747    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.896747    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.900996    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:14.905084    7680 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-rzk8c" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:14.905290    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-rzk8c
	I0408 23:53:14.905290    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.905290    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.905348    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.914639    7680 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0408 23:53:14.915332    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:14.915332    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.915332    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.915332    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.919303    7680 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0408 23:53:14.919610    7680 pod_ready.go:93] pod "coredns-668d6bf9bc-rzk8c" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:14.919702    7680 pod_ready.go:82] duration metric: took 14.6183ms for pod "coredns-668d6bf9bc-rzk8c" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:14.919702    7680 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-scvcr" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:14.919824    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-scvcr
	I0408 23:53:14.919851    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.919894    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.919894    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.924173    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:14.924760    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:14.924760    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.924760    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.924760    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.928824    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:14.929473    7680 pod_ready.go:93] pod "coredns-668d6bf9bc-scvcr" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:14.929503    7680 pod_ready.go:82] duration metric: took 9.8006ms for pod "coredns-668d6bf9bc-scvcr" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:14.929503    7680 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:14.929692    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-061400
	I0408 23:53:14.929692    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.929692    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.929692    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.932989    7680 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0408 23:53:14.932989    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:14.932989    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.932989    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.932989    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.937078    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:14.937919    7680 pod_ready.go:93] pod "etcd-ha-061400" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:14.937919    7680 pod_ready.go:82] duration metric: took 8.3451ms for pod "etcd-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:14.937982    7680 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:14.938071    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-061400-m02
	I0408 23:53:14.938071    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.938132    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.938132    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.945844    7680 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0408 23:53:14.946393    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:14.946393    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.946393    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.946393    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.948579    7680 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0408 23:53:14.949680    7680 pod_ready.go:93] pod "etcd-ha-061400-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:14.949680    7680 pod_ready.go:82] duration metric: took 11.6982ms for pod "etcd-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:14.949728    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:15.089872    7680 request.go:661] Waited for 140.1414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-061400
	I0408 23:53:15.089872    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-061400
	I0408 23:53:15.089872    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:15.089872    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:15.089872    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:15.100282    7680 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0408 23:53:15.290287    7680 request.go:661] Waited for 187.8719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:15.290287    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:15.290287    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:15.290287    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:15.290287    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:15.300468    7680 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0408 23:53:15.300612    7680 pod_ready.go:93] pod "kube-apiserver-ha-061400" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:15.300612    7680 pod_ready.go:82] duration metric: took 350.8788ms for pod "kube-apiserver-ha-061400" in "kube-system" namespace to be "Ready" ...
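
The "Waited ... due to client-side throttling" lines come from client-go's own rate limiter, not the apiserver: the rest.Config dump earlier shows QPS:0, Burst:0, which fall back to client-go's defaults of 5 requests/s with a burst of 10, so bursts of status checks get spaced out. Raising the limits is a one-line change, sketched below (values are illustrative):

    // Sketch only: client-go defaults to 5 QPS / burst 10 when rest.Config
    // leaves QPS and Burst at zero, which is what spaces out the requests
    // above. Higher limits suppress the client-side throttling waits.
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }
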
	I0408 23:53:15.300612    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:15.490532    7680 request.go:661] Waited for 189.9175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-061400-m02
	I0408 23:53:15.490981    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-061400-m02
	I0408 23:53:15.490981    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:15.490981    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:15.491142    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:15.496996    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:15.690514    7680 request.go:661] Waited for 193.1252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:15.690514    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:15.690514    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:15.690514    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:15.690514    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:15.696202    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:15.696888    7680 pod_ready.go:93] pod "kube-apiserver-ha-061400-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:15.696888    7680 pod_ready.go:82] duration metric: took 396.2709ms for pod "kube-apiserver-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:15.696888    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:15.890032    7680 request.go:661] Waited for 192.6554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-061400
	I0408 23:53:15.890032    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-061400
	I0408 23:53:15.890526    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:15.890526    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:15.890580    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:15.907152    7680 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0408 23:53:16.089843    7680 request.go:661] Waited for 181.9354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:16.090291    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:16.090291    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:16.090291    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:16.090291    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:16.095941    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:16.095941    7680 pod_ready.go:93] pod "kube-controller-manager-ha-061400" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:16.095941    7680 pod_ready.go:82] duration metric: took 399.0483ms for pod "kube-controller-manager-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:16.095941    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:16.290598    7680 request.go:661] Waited for 194.6541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-061400-m02
	I0408 23:53:16.290598    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-061400-m02
	I0408 23:53:16.290598    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:16.290598    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:16.290598    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:16.296828    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:16.489506    7680 request.go:661] Waited for 191.7759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:16.489506    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:16.489506    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:16.489506    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:16.489506    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:16.495375    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:16.495732    7680 pod_ready.go:93] pod "kube-controller-manager-ha-061400-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:16.495732    7680 pod_ready.go:82] duration metric: took 399.7848ms for pod "kube-controller-manager-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:16.495732    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lr9jb" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:16.689565    7680 request.go:661] Waited for 193.5779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lr9jb
	I0408 23:53:16.689565    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lr9jb
	I0408 23:53:16.689565    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:16.689565    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:16.689565    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:16.696231    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:16.890217    7680 request.go:661] Waited for 192.957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:16.890217    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:16.890217    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:16.890217    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:16.890217    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:16.896072    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:16.896721    7680 pod_ready.go:93] pod "kube-proxy-lr9jb" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:16.896721    7680 pod_ready.go:82] duration metric: took 400.798ms for pod "kube-proxy-lr9jb" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:16.896776    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nkwqr" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:17.089757    7680 request.go:661] Waited for 192.8919ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nkwqr
	I0408 23:53:17.089757    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nkwqr
	I0408 23:53:17.090188    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:17.090188    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:17.090188    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:17.095005    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:17.289698    7680 request.go:661] Waited for 194.5127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:17.289698    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:17.289698    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:17.289698    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:17.289698    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:17.297131    7680 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0408 23:53:17.297614    7680 pod_ready.go:93] pod "kube-proxy-nkwqr" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:17.297667    7680 pod_ready.go:82] duration metric: took 400.8855ms for pod "kube-proxy-nkwqr" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:17.297667    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:17.490575    7680 request.go:661] Waited for 192.9054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-061400
	I0408 23:53:17.491192    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-061400
	I0408 23:53:17.491192    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:17.491192    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:17.491192    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:17.496937    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:17.689970    7680 request.go:661] Waited for 192.6087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:17.689970    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:17.689970    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:17.689970    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:17.689970    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:17.695445    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:17.695781    7680 pod_ready.go:93] pod "kube-scheduler-ha-061400" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:17.695922    7680 pod_ready.go:82] duration metric: took 398.109ms for pod "kube-scheduler-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:17.695922    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:17.889517    7680 request.go:661] Waited for 193.5927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-061400-m02
	I0408 23:53:17.889517    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-061400-m02
	I0408 23:53:17.889517    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:17.889517    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:17.889517    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:17.894627    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:18.090665    7680 request.go:661] Waited for 195.6453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:18.090665    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:18.090665    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:18.090665    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:18.090665    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:18.097490    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:18.097977    7680 pod_ready.go:93] pod "kube-scheduler-ha-061400-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:18.098086    7680 pod_ready.go:82] duration metric: took 402.1585ms for pod "kube-scheduler-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:18.098086    7680 pod_ready.go:39] duration metric: took 3.2016031s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
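Each pod_ready stanza above has the same shape: GET the pod, GET its node, then report the pod's Ready condition, with a 6m0s budget per pod. A minimal client-go sketch of that readiness wait (the kubeconfig-based clientset construction and the namespace/pod names are assumptions for illustration; minikube's actual loop lives in pod_ready.go):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True,
    // matching the `has status "Ready":"True"` lines in the log.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // the 6m0s budget above
        defer cancel()

        for {
            pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, "kube-apiserver-ha-061400", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            select {
            case <-ctx.Done():
                panic("timed out waiting for pod to be Ready")
            case <-time.After(500 * time.Millisecond):
            }
        }
    }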
	I0408 23:53:18.098193    7680 api_server.go:52] waiting for apiserver process to appear ...
	I0408 23:53:18.110025    7680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 23:53:18.137634    7680 api_server.go:72] duration metric: took 24.1622444s to wait for apiserver process to appear ...
	I0408 23:53:18.137634    7680 api_server.go:88] waiting for apiserver healthz status ...
	I0408 23:53:18.137634    7680 api_server.go:253] Checking apiserver healthz at https://192.168.119.206:8443/healthz ...
	I0408 23:53:18.155108    7680 api_server.go:279] https://192.168.119.206:8443/healthz returned 200:
	ok
	I0408 23:53:18.155358    7680 round_trippers.go:470] GET https://192.168.119.206:8443/version
	I0408 23:53:18.155443    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:18.155443    7680 round_trippers.go:480]     Accept: application/json, */*
	I0408 23:53:18.155443    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:18.157185    7680 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0408 23:53:18.157185    7680 api_server.go:141] control plane version: v1.32.2
	I0408 23:53:18.157185    7680 api_server.go:131] duration metric: took 19.5511ms to wait for apiserver health ...
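The healthz probe above is a plain HTTPS GET that expects the literal body "ok", followed by a GET of /version to read the control-plane version. A self-contained sketch of the health check (the URL and the InsecureSkipVerify shortcut are assumptions for illustration; minikube authenticates against the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Illustrative shortcut only: minikube verifies the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }

        resp, err := client.Get("https://192.168.119.206:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver answers 200 with the body "ok", as in the log.
        fmt.Printf("%d %s\n", resp.StatusCode, body)
    }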
	I0408 23:53:18.157185    7680 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 23:53:18.290358    7680 request.go:661] Waited for 133.1716ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods
	I0408 23:53:18.290358    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods
	I0408 23:53:18.290358    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:18.290358    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:18.290358    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:18.297373    7680 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0408 23:53:18.302165    7680 system_pods.go:59] 17 kube-system pods found
	I0408 23:53:18.302236    7680 system_pods.go:61] "coredns-668d6bf9bc-rzk8c" [18f6703f-34ad-403f-b86d-9a8f3dc927a0] Running
	I0408 23:53:18.302344    7680 system_pods.go:61] "coredns-668d6bf9bc-scvcr" [952efdd7-d201-4747-833a-59e05925e74f] Running
	I0408 23:53:18.302344    7680 system_pods.go:61] "etcd-ha-061400" [429dfaa4-c9bf-47dc-81f9-ab33ad3acee4] Running
	I0408 23:53:18.302344    7680 system_pods.go:61] "etcd-ha-061400-m02" [5fa6b2de-e3e8-4c95-84e3-3e344ce6a56f] Running
	I0408 23:53:18.302344    7680 system_pods.go:61] "kindnet-44mc6" [a8a857e1-90f1-4346-97a7-0b083352aeda] Running
	I0408 23:53:18.302344    7680 system_pods.go:61] "kindnet-7mvqz" [3fcc4494-1878-48e2-97ee-f76dcff55c29] Running
	I0408 23:53:18.302402    7680 system_pods.go:61] "kube-apiserver-ha-061400" [488f7097-53fd-4754-aa77-78aed24b3494] Running
	I0408 23:53:18.302402    7680 system_pods.go:61] "kube-apiserver-ha-061400-m02" [1f83551d-39c0-4485-b4a6-d44c3e58b435] Running
	I0408 23:53:18.302402    7680 system_pods.go:61] "kube-controller-manager-ha-061400" [28c1163e-e283-49b0-bab7-b91d1b73ab27] Running
	I0408 23:53:18.302444    7680 system_pods.go:61] "kube-controller-manager-ha-061400-m02" [89ab7c55-91a9-452b-9c0e-3673bf608abc] Running
	I0408 23:53:18.302444    7680 system_pods.go:61] "kube-proxy-lr9jb" [4ea29fd2-fb54-44d7-a558-a272fd4f05f5] Running
	I0408 23:53:18.302482    7680 system_pods.go:61] "kube-proxy-nkwqr" [20f509f0-ca9e-4464-b87f-e5d226ce9e3c] Running
	I0408 23:53:18.302482    7680 system_pods.go:61] "kube-scheduler-ha-061400" [b16bc563-a6aa-49d3-b7c4-74b5827bb66e] Running
	I0408 23:53:18.302482    7680 system_pods.go:61] "kube-scheduler-ha-061400-m02" [e9a386c4-fe99-49d0-bff9-d434ba81d735] Running
	I0408 23:53:18.302482    7680 system_pods.go:61] "kube-vip-ha-061400" [b677e4c1-39bf-459c-a33c-ecfce817e2a5] Running
	I0408 23:53:18.302482    7680 system_pods.go:61] "kube-vip-ha-061400-m02" [2a30dc1d-3208-468f-8614-a469337f5ac2] Running
	I0408 23:53:18.302482    7680 system_pods.go:61] "storage-provisioner" [bd11797d-cec8-419e-b7e9-1d537d9a7378] Running
	I0408 23:53:18.302482    7680 system_pods.go:74] duration metric: took 145.2951ms to wait for pod list to return data ...
	I0408 23:53:18.302579    7680 default_sa.go:34] waiting for default service account to be created ...
	I0408 23:53:18.489976    7680 request.go:661] Waited for 187.3685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/default/serviceaccounts
	I0408 23:53:18.489976    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/default/serviceaccounts
	I0408 23:53:18.489976    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:18.489976    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:18.489976    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:18.495143    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:18.495468    7680 default_sa.go:45] found service account: "default"
	I0408 23:53:18.495468    7680 default_sa.go:55] duration metric: took 192.8863ms for default service account to be created ...
	I0408 23:53:18.495468    7680 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 23:53:18.690501    7680 request.go:661] Waited for 195.0304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods
	I0408 23:53:18.690501    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods
	I0408 23:53:18.690501    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:18.690501    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:18.690501    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:18.696208    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:18.698979    7680 system_pods.go:86] 17 kube-system pods found
	I0408 23:53:18.699060    7680 system_pods.go:89] "coredns-668d6bf9bc-rzk8c" [18f6703f-34ad-403f-b86d-9a8f3dc927a0] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "coredns-668d6bf9bc-scvcr" [952efdd7-d201-4747-833a-59e05925e74f] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "etcd-ha-061400" [429dfaa4-c9bf-47dc-81f9-ab33ad3acee4] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "etcd-ha-061400-m02" [5fa6b2de-e3e8-4c95-84e3-3e344ce6a56f] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "kindnet-44mc6" [a8a857e1-90f1-4346-97a7-0b083352aeda] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "kindnet-7mvqz" [3fcc4494-1878-48e2-97ee-f76dcff55c29] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "kube-apiserver-ha-061400" [488f7097-53fd-4754-aa77-78aed24b3494] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "kube-apiserver-ha-061400-m02" [1f83551d-39c0-4485-b4a6-d44c3e58b435] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "kube-controller-manager-ha-061400" [28c1163e-e283-49b0-bab7-b91d1b73ab27] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "kube-controller-manager-ha-061400-m02" [89ab7c55-91a9-452b-9c0e-3673bf608abc] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "kube-proxy-lr9jb" [4ea29fd2-fb54-44d7-a558-a272fd4f05f5] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "kube-proxy-nkwqr" [20f509f0-ca9e-4464-b87f-e5d226ce9e3c] Running
	I0408 23:53:18.699129    7680 system_pods.go:89] "kube-scheduler-ha-061400" [b16bc563-a6aa-49d3-b7c4-74b5827bb66e] Running
	I0408 23:53:18.699234    7680 system_pods.go:89] "kube-scheduler-ha-061400-m02" [e9a386c4-fe99-49d0-bff9-d434ba81d735] Running
	I0408 23:53:18.699234    7680 system_pods.go:89] "kube-vip-ha-061400" [b677e4c1-39bf-459c-a33c-ecfce817e2a5] Running
	I0408 23:53:18.699347    7680 system_pods.go:89] "kube-vip-ha-061400-m02" [2a30dc1d-3208-468f-8614-a469337f5ac2] Running
	I0408 23:53:18.699347    7680 system_pods.go:89] "storage-provisioner" [bd11797d-cec8-419e-b7e9-1d537d9a7378] Running
	I0408 23:53:18.699347    7680 system_pods.go:126] duration metric: took 203.8759ms to wait for k8s-apps to be running ...
	I0408 23:53:18.699347    7680 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 23:53:18.710357    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 23:53:18.736388    7680 system_svc.go:56] duration metric: took 37.0412ms WaitForService to wait for kubelet
	I0408 23:53:18.736388    7680 kubeadm.go:582] duration metric: took 24.7609912s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 23:53:18.737346    7680 node_conditions.go:102] verifying NodePressure condition ...
	I0408 23:53:18.889956    7680 request.go:661] Waited for 152.608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes
	I0408 23:53:18.890540    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes
	I0408 23:53:18.890540    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:18.890540    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:18.890540    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:18.896069    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:18.896715    7680 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 23:53:18.896715    7680 node_conditions.go:123] node cpu capacity is 2
	I0408 23:53:18.896715    7680 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 23:53:18.896715    7680 node_conditions.go:123] node cpu capacity is 2
	I0408 23:53:18.896715    7680 node_conditions.go:105] duration metric: took 159.3663ms to run NodePressure ...
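The NodePressure step lists every node and reads capacity straight off node.Status.Capacity, which is where the 17734596Ki ephemeral-storage and 2-CPU figures above come from (one pair per node). A minimal sketch with client-go, assuming the same kubeconfig-based setup as the earlier readiness sketch:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, node := range nodes.Items {
            storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := node.Status.Capacity[corev1.ResourceCPU]
            // Mirrors the "node storage ephemeral capacity" / "node cpu capacity" lines above.
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", node.Name, storage.String(), cpu.String())
        }
    }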
	I0408 23:53:18.896715    7680 start.go:241] waiting for startup goroutines ...
	I0408 23:53:18.896715    7680 start.go:255] writing updated cluster config ...
	I0408 23:53:18.901866    7680 out.go:201] 
	I0408 23:53:18.920442    7680 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:53:18.921474    7680 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\config.json ...
	I0408 23:53:18.930708    7680 out.go:177] * Starting "ha-061400-m03" control-plane node in "ha-061400" cluster
	I0408 23:53:18.933750    7680 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0408 23:53:18.933861    7680 cache.go:56] Caching tarball of preloaded images
	I0408 23:53:18.934001    7680 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0408 23:53:18.934001    7680 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0408 23:53:18.934575    7680 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\config.json ...
	I0408 23:53:18.941401    7680 start.go:360] acquireMachinesLock for ha-061400-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 23:53:18.941401    7680 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-061400-m03"
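The acquireMachinesLock struct dump above (Name/Clock/Delay/Timeout/Cancel) matches the Spec type of the juju mutex package, which serializes machine creation across concurrent minikube processes on the same host. A rough sketch under that assumption (import paths and the lock name are illustrative; the 500ms retry delay and 13m timeout are the values from the log):

    package main

    import (
        "fmt"
        "time"

        "github.com/juju/clock"
        "github.com/juju/mutex/v2"
    )

    func main() {
        // Spec fields mirror the ones printed in the log line above.
        releaser, err := mutex.Acquire(mutex.Spec{
            Name:    "mkexample", // illustrative lock name
            Clock:   clock.WallClock,
            Delay:   500 * time.Millisecond,
            Timeout: 13 * time.Minute,
        })
        if err != nil {
            panic(err)
        }
        defer releaser.Release()
        fmt.Println("holding the machines lock")
    }

Here the lock was uncontended, hence the "took 0s" line; a second minikube process creating a machine at the same time would retry every 500ms until the 13m timeout.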
	I0408 23:53:18.942067    7680 start.go:93] Provisioning new machine with config: &{Name:ha-061400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP:192.168.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.119.206 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.118.215 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 23:53:18.942067    7680 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0408 23:53:18.948508    7680 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 23:53:18.949126    7680 start.go:159] libmachine.API.Create for "ha-061400" (driver="hyperv")
	I0408 23:53:18.949126    7680 client.go:168] LocalClient.Create starting
	I0408 23:53:18.949457    7680 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0408 23:53:18.949861    7680 main.go:141] libmachine: Decoding PEM data...
	I0408 23:53:18.949861    7680 main.go:141] libmachine: Parsing certificate...
	I0408 23:53:18.950131    7680 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0408 23:53:18.950454    7680 main.go:141] libmachine: Decoding PEM data...
	I0408 23:53:18.950454    7680 main.go:141] libmachine: Parsing certificate...
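The "Decoding PEM data... / Parsing certificate..." pairs above are the standard library's encoding/pem and crypto/x509 at work on ca.pem and cert.pem. A minimal sketch (the path is the one from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem`)
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data) // "Decoding PEM data..."
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes) // "Parsing certificate..."
        if err != nil {
            panic(err)
        }
        fmt.Println("CA subject:", cert.Subject)
    }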
	I0408 23:53:18.950454    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0408 23:53:20.857708    7680 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0408 23:53:20.857708    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:20.857708    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0408 23:53:22.612466    7680 main.go:141] libmachine: [stdout =====>] : False
	
	I0408 23:53:22.612626    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:22.612717    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0408 23:53:24.101556    7680 main.go:141] libmachine: [stdout =====>] : True
	
	I0408 23:53:24.102107    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:24.102107    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0408 23:53:27.957971    7680 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0408 23:53:27.957971    7680 main.go:141] libmachine: [stderr =====>] : 
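Every `[executing ==>]` / `[stdout =====>]` pair in this log is the driver shelling out to powershell.exe; for switch discovery it additionally parses the ConvertTo-Json output shown above. A minimal sketch of that round-trip (the vmSwitch struct fields follow the JSON in the log; powershell.exe is assumed to be on PATH):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // vmSwitch matches the fields selected by the Get-VMSwitch pipeline above.
    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int
    }

    func main() {
        script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
            `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`

        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
        if err != nil {
            panic(err)
        }

        var switches []vmSwitch
        if err := json.Unmarshal(out, &switches); err != nil {
            panic(err)
        }
        for _, s := range switches {
            fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
        }
    }

The @(...) wrapper in the pipeline forces a JSON array even when a single switch is found, which keeps the Unmarshal target a slice in all cases.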
	I0408 23:53:27.960021    7680 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0408 23:53:28.398547    7680 main.go:141] libmachine: Creating SSH key...
	I0408 23:53:29.364239    7680 main.go:141] libmachine: Creating VM...
	I0408 23:53:29.364239    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0408 23:53:32.359359    7680 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0408 23:53:32.360143    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:32.360143    7680 main.go:141] libmachine: Using switch "Default Switch"
	I0408 23:53:32.360143    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0408 23:53:34.118084    7680 main.go:141] libmachine: [stdout =====>] : True
	
	I0408 23:53:34.118769    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:34.118769    7680 main.go:141] libmachine: Creating VHD
	I0408 23:53:34.118769    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0408 23:53:37.963116    7680 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 2390A371-F1B2-4C2A-ABA8-80A853D65317
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0408 23:53:37.963116    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:37.963116    7680 main.go:141] libmachine: Writing magic tar header
	I0408 23:53:37.963116    7680 main.go:141] libmachine: Writing SSH key tar header
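The "Writing magic tar header" / "Writing SSH key tar header" steps pack the freshly generated SSH key into a small tar stream written into the fixed VHD created just above, so the guest can pick the key up on first boot; the disk is then converted to dynamic and resized. A rough sketch of building such an in-memory tar with archive/tar (the entry name, key path, and write offset are assumptions for illustration, not the exact layout the driver uses):

    package main

    import (
        "archive/tar"
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        key, err := os.ReadFile("id_rsa.pub") // illustrative path
        if err != nil {
            panic(err)
        }

        var buf bytes.Buffer
        tw := tar.NewWriter(&buf)
        // One entry carrying the public key; the guest's boot scripts
        // are expected to extract it into authorized_keys.
        if err := tw.WriteHeader(&tar.Header{
            Name: ".ssh/authorized_keys", // assumed entry name
            Mode: 0o600,
            Size: int64(len(key)),
        }); err != nil {
            panic(err)
        }
        if _, err := tw.Write(key); err != nil {
            panic(err)
        }
        if err := tw.Close(); err != nil {
            panic(err)
        }

        // Write the archive into the raw disk image (offset assumed here).
        disk, err := os.OpenFile("fixed.vhd", os.O_WRONLY, 0)
        if err != nil {
            panic(err)
        }
        defer disk.Close()
        if _, err := disk.WriteAt(buf.Bytes(), 0); err != nil {
            panic(err)
        }
        fmt.Printf("wrote %d tar bytes into the disk image\n", buf.Len())
    }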
	I0408 23:53:37.976936    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0408 23:53:41.190377    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:53:41.190496    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:41.190571    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\disk.vhd' -SizeBytes 20000MB
	I0408 23:53:43.812870    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:53:43.812870    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:43.812978    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-061400-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0408 23:53:47.455397    7680 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-061400-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0408 23:53:47.455472    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:47.455564    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-061400-m03 -DynamicMemoryEnabled $false
	I0408 23:53:49.703951    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:53:49.704804    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:49.705081    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-061400-m03 -Count 2
	I0408 23:53:51.891541    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:53:51.891541    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:51.892250    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-061400-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\boot2docker.iso'
	I0408 23:53:54.502456    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:53:54.502456    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:54.503112    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-061400-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\disk.vhd'
	I0408 23:53:57.182140    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:53:57.182140    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:57.182140    7680 main.go:141] libmachine: Starting VM...
	I0408 23:53:57.182140    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-061400-m03
	I0408 23:54:00.372427    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:54:00.372427    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:00.372427    7680 main.go:141] libmachine: Waiting for host to start...
	I0408 23:54:00.373189    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:02.706214    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:02.706680    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:02.706680    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:54:05.363187    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:54:05.363187    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:06.363876    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:08.661355    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:08.661355    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:08.661637    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:54:11.302687    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:54:11.302749    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:12.303322    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:14.580207    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:14.581252    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:14.581306    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:54:17.163857    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:54:17.164205    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:18.165191    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:20.433715    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:20.433715    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:20.433715    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:54:23.041252    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:54:23.041252    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:24.042955    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:26.353871    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:26.353871    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:26.354770    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:54:29.050023    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:54:29.050023    7680 main.go:141] libmachine: [stderr =====>] : 
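The "Waiting for host to start..." sequence above is a poll loop: the adapter reports no address until the guest's DHCP lease lands (roughly 30 seconds here), with a short sleep between Get-VM/ipaddresses attempts. A condensed sketch of that wait (the PowerShell expression is the one from the log; the retry budget is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // vmIP asks Hyper-V for the first IP address on the VM's first adapter,
    // as the log's (( Hyper-V\Get-VM ... ).networkadapters[0]).ipaddresses[0] call does.
    func vmIP(name string) (string, error) {
        script := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name)
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        for attempt := 0; attempt < 60; attempt++ { // retry budget is illustrative
            ip, err := vmIP("ha-061400-m03")
            if err == nil && ip != "" {
                fmt.Println("host is up at", ip)
                return
            }
            time.Sleep(time.Second)
        }
        panic("timed out waiting for an IP address")
    }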
	I0408 23:54:29.050023    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:31.334140    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:31.335159    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:31.335234    7680 machine.go:93] provisionDockerMachine start ...
	I0408 23:54:31.335438    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:33.659626    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:33.660050    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:33.660174    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:54:36.294561    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:54:36.294561    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:36.303606    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:54:36.304607    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.126.102 22 <nil> <nil>}
	I0408 23:54:36.304607    7680 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 23:54:36.445191    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 23:54:36.445191    7680 buildroot.go:166] provisioning hostname "ha-061400-m03"
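Once the IP is known, provisioning switches to the native Go SSH client (the &{{{<nil> ...}} dump above is that client's configuration struct) and runs each step as a remote command, starting with `hostname`. A minimal equivalent with golang.org/x/crypto/ssh (the key path is taken from the log; the InsecureIgnoreHostKey shortcut is illustrative only):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }

        client, err := ssh.Dial("tcp", "192.168.126.102:22", &ssh.ClientConfig{
            User: "docker",
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Illustrative shortcut; production code should pin the host key.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        out, err := session.Output("hostname") // the same first command as in the log
        if err != nil {
            panic(err)
        }
        fmt.Printf("hostname: %s", out)
    }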
	I0408 23:54:36.445292    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:38.656366    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:38.656366    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:38.656366    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:54:41.255971    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:54:41.257094    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:41.262802    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:54:41.263356    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.126.102 22 <nil> <nil>}
	I0408 23:54:41.263466    7680 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-061400-m03 && echo "ha-061400-m03" | sudo tee /etc/hostname
	I0408 23:54:41.431476    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-061400-m03
	
	I0408 23:54:41.431822    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:43.611833    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:43.612719    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:43.612719    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:54:46.214054    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:54:46.214054    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:46.220367    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:54:46.220492    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.126.102 22 <nil> <nil>}
	I0408 23:54:46.220492    7680 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-061400-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-061400-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-061400-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 23:54:46.378969    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0408 23:54:46.378969    7680 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0408 23:54:46.378969    7680 buildroot.go:174] setting up certificates
	I0408 23:54:46.378969    7680 provision.go:84] configureAuth start
	I0408 23:54:46.378969    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:48.542192    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:48.542192    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:48.542192    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:54:51.142868    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:54:51.143675    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:51.143675    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:53.312251    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:53.313160    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:53.313160    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:54:55.862684    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:54:55.862684    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:55.862684    7680 provision.go:143] copyHostCerts
	I0408 23:54:55.863595    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0408 23:54:55.863886    7680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0408 23:54:55.863886    7680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0408 23:54:55.864537    7680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0408 23:54:55.865760    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0408 23:54:55.866066    7680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0408 23:54:55.866066    7680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0408 23:54:55.866066    7680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0408 23:54:55.866912    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0408 23:54:55.867613    7680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0408 23:54:55.867613    7680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0408 23:54:55.867613    7680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0408 23:54:55.869055    7680 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-061400-m03 san=[127.0.0.1 192.168.126.102 ha-061400-m03 localhost minikube]
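provision.go:117 mints a per-machine server certificate signed by the local CA, with exactly the SAN list shown in the log (127.0.0.1, the VM IP, the hostname, localhost, minikube). A compact sketch of issuing such a cert with crypto/x509, assuming caCert/caKey were already parsed as in the earlier PEM sketch (this is an illustration of the technique, not minikube's exact implementation):

    package servercert

    import (
        "crypto"
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // NewServerCert issues a CA-signed server certificate carrying the SAN
    // list from the log: IPs plus DNS names like ha-061400-m03 and minikube.
    func NewServerCert(caCert *x509.Certificate, caKey crypto.Signer) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        template := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-061400-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // the CertExpiration from the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-061400-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.126.102")},
        }
        der, err := x509.CreateCertificate(rand.Reader, template, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }

The DER bytes and key would then be PEM-encoded and copied to /etc/docker/server.pem and server-key.pem, which is what the copyRemoteCerts step below does over SSH.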
	I0408 23:54:55.899472    7680 provision.go:177] copyRemoteCerts
	I0408 23:54:55.909473    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 23:54:55.909473    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:58.076811    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:58.077010    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:58.077097    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:00.656887    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:00.657142    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:00.657142    7680 sshutil.go:53] new ssh client: &{IP:192.168.126.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\id_rsa Username:docker}
	I0408 23:55:00.768275    7680 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8587372s)
	I0408 23:55:00.768275    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0408 23:55:00.768969    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 23:55:00.817148    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0408 23:55:00.817148    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0408 23:55:00.862216    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0408 23:55:00.862674    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0408 23:55:00.904960    7680 provision.go:87] duration metric: took 14.5257976s to configureAuth
	I0408 23:55:00.904960    7680 buildroot.go:189] setting minikube options for container-runtime
	I0408 23:55:00.906022    7680 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:55:00.906248    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:55:03.068956    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:03.069792    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:03.069792    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:05.612828    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:05.612828    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:05.618022    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:55:05.618746    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.126.102 22 <nil> <nil>}
	I0408 23:55:05.618746    7680 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0408 23:55:05.757172    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0408 23:55:05.757172    7680 buildroot.go:70] root file system type: tmpfs
	I0408 23:55:05.757172    7680 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0408 23:55:05.757172    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:55:07.927578    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:07.927578    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:07.927707    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:10.471367    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:10.471367    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:10.478271    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:55:10.478824    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.126.102 22 <nil> <nil>}
	I0408 23:55:10.479017    7680 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.119.206"
	Environment="NO_PROXY=192.168.119.206,192.168.118.215"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0408 23:55:10.642222    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.119.206
	Environment=NO_PROXY=192.168.119.206,192.168.118.215
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0408 23:55:10.642222    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:55:12.846397    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:12.846397    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:12.847411    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:15.438884    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:15.438884    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:15.445640    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:55:15.445791    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.126.102 22 <nil> <nil>}
	I0408 23:55:15.445791    7680 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0408 23:55:17.720100    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0408 23:55:17.720199    7680 machine.go:96] duration metric: took 46.3843472s to provisionDockerMachine
	I0408 23:55:17.720199    7680 client.go:171] duration metric: took 1m58.7694951s to LocalClient.Create
	I0408 23:55:17.720257    7680 start.go:167] duration metric: took 1m58.7695535s to libmachine.API.Create "ha-061400"
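The `diff -u ... || { mv ...; daemon-reload; enable; restart; }` sequence above is what makes unit provisioning idempotent: the candidate is written to docker.service.new, compared against the live copy, and only moved into place (with a reload and restart) when the two differ. On this fresh node the diff fails because no live unit exists yet, hence the "can't stat" message and the freshly created symlink. A sketch of composing that shell, assuming the unit path is the only input:

package main

import (
	"fmt"
	"path"
)

// swapUnit builds the idempotent install command seen in the log: install the
// .new file and restart the service only when it differs from the live unit.
func swapUnit(unit string) string {
	svc := path.Base(unit) // "docker.service"
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
		unit, svc)
}

func main() {
	fmt.Println(swapUnit("/lib/systemd/system/docker.service"))
}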
	I0408 23:55:17.720313    7680 start.go:293] postStartSetup for "ha-061400-m03" (driver="hyperv")
	I0408 23:55:17.720313    7680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 23:55:17.730571    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 23:55:17.730571    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:55:19.880328    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:19.881242    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:19.881242    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:22.604832    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:22.604904    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:22.605601    7680 sshutil.go:53] new ssh client: &{IP:192.168.126.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\id_rsa Username:docker}
	I0408 23:55:22.727109    7680 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9964714s)
	I0408 23:55:22.746521    7680 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 23:55:22.754090    7680 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 23:55:22.754090    7680 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0408 23:55:22.754848    7680 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0408 23:55:22.755963    7680 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
	I0408 23:55:22.756036    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /etc/ssl/certs/98642.pem
	I0408 23:55:22.767362    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 23:55:22.787798    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
	I0408 23:55:22.850437    7680 start.go:296] duration metric: took 5.1300551s for postStartSetup
	I0408 23:55:22.854121    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:55:25.002316    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:25.003357    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:25.003357    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:27.536875    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:27.536875    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:27.538029    7680 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\config.json ...
	I0408 23:55:27.542072    7680 start.go:128] duration metric: took 2m8.5982678s to createHost
	I0408 23:55:27.542072    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:55:29.743128    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:29.743231    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:29.743308    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:32.295043    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:32.295043    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:32.300948    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:55:32.301576    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.126.102 22 <nil> <nil>}
	I0408 23:55:32.301576    7680 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 23:55:32.437433    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744156532.464345148
	
	I0408 23:55:32.437433    7680 fix.go:216] guest clock: 1744156532.464345148
	I0408 23:55:32.437433    7680 fix.go:229] Guest: 2025-04-08 23:55:32.464345148 +0000 UTC Remote: 2025-04-08 23:55:27.5420727 +0000 UTC m=+561.904383401 (delta=4.922272448s)
	I0408 23:55:32.437433    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:55:34.590047    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:34.590047    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:34.590626    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:37.108963    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:37.108963    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:37.115315    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:55:37.116105    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.126.102 22 <nil> <nil>}
	I0408 23:55:37.116105    7680 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744156532
	I0408 23:55:37.256890    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr  8 23:55:32 UTC 2025
	
	I0408 23:55:37.256890    7680 fix.go:236] clock set: Tue Apr  8 23:55:32 UTC 2025
	 (err=<nil>)
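The clock fix-up reads the guest time with `date +%s.%N`, compares it to the host wall clock (a 4.9 s skew here, accumulated while the VM booted and provisioned), and resets the guest with `sudo date -s @<seconds>`. A sketch of the delta computation, assuming the nine-digit fractional format that `date +%s.%N` emits:

package main

import (
	"fmt"
	"log"
	"strconv"
	"strings"
	"time"
)

// guestTime parses the "seconds.nanoseconds" string printed by `date +%s.%N`.
// Assumes the fractional part carries the full nine digits, as in this log.
func guestTime(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := guestTime("1744156532.464345148") // value from the log
	if err != nil {
		log.Fatal(err)
	}
	delta := time.Until(guest) // positive when the guest runs ahead of the host
	fmt.Printf("guest: %s, delta vs. host: %s\n", guest.UTC(), delta)
	// When |delta| is significant, run `sudo date -s @<seconds>` on the guest.
}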
	I0408 23:55:37.256890    7680 start.go:83] releasing machines lock for "ha-061400-m03", held for 2m18.3136521s
	I0408 23:55:37.257430    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:55:39.487177    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:39.487177    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:39.488139    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:42.071794    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:42.071794    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:42.076775    7680 out.go:177] * Found network options:
	I0408 23:55:42.079895    7680 out.go:177]   - NO_PROXY=192.168.119.206,192.168.118.215
	W0408 23:55:42.082701    7680 proxy.go:119] fail to check proxy env: Error ip not in block
	W0408 23:55:42.082914    7680 proxy.go:119] fail to check proxy env: Error ip not in block
	I0408 23:55:42.085605    7680 out.go:177]   - NO_PROXY=192.168.119.206,192.168.118.215
	W0408 23:55:42.087693    7680 proxy.go:119] fail to check proxy env: Error ip not in block
	W0408 23:55:42.087693    7680 proxy.go:119] fail to check proxy env: Error ip not in block
	W0408 23:55:42.088664    7680 proxy.go:119] fail to check proxy env: Error ip not in block
	W0408 23:55:42.088664    7680 proxy.go:119] fail to check proxy env: Error ip not in block
	I0408 23:55:42.091656    7680 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0408 23:55:42.091656    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:55:42.102600    7680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0408 23:55:42.102600    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:55:44.362207    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:44.362562    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:44.362640    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:44.365734    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:44.365734    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:44.365734    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:47.110534    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:47.110534    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:47.110941    7680 sshutil.go:53] new ssh client: &{IP:192.168.126.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\id_rsa Username:docker}
	I0408 23:55:47.141073    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:47.141073    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:47.142607    7680 sshutil.go:53] new ssh client: &{IP:192.168.126.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\id_rsa Username:docker}
	I0408 23:55:47.216473    7680 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1138054s)
	W0408 23:55:47.216534    7680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 23:55:47.227597    7680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 23:55:47.232526    7680 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1408029s)
	W0408 23:55:47.232526    7680 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
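Note the root cause recorded here: the connectivity check passes the host-side binary name `curl.exe` into the Linux guest, where only `curl` exists, so the command exits 127 with "command not found". The registry warning printed a few lines below therefore reflects the missing binary name, not a demonstrated network failure. A hypothetical one-function sketch of the obvious fix, choosing the name by the target OS rather than the host OS:

package main

import "fmt"

// curlBin is hypothetical: pick the binary name for the system the command
// will actually run on (the Linux guest here), not for the host issuing it.
func curlBin(targetGOOS string) string {
	if targetGOOS == "windows" {
		return "curl.exe"
	}
	return "curl"
}

func main() {
	fmt.Println(curlBin("linux")) // "curl", which is what exists in the guest
}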
	I0408 23:55:47.258229    7680 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 23:55:47.258229    7680 start.go:495] detecting cgroup driver to use...
	I0408 23:55:47.258229    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:55:47.305727    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0408 23:55:47.334063    7680 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0408 23:55:47.334063    7680 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0408 23:55:47.336031    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 23:55:47.354572    7680 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 23:55:47.365644    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 23:55:47.396104    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:55:47.432393    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 23:55:47.463773    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:55:47.496006    7680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 23:55:47.530023    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 23:55:47.561127    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 23:55:47.592982    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0408 23:55:47.624422    7680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 23:55:47.641534    7680 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 23:55:47.653155    7680 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 23:55:47.687820    7680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
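The status-255 sysctl probe above is expected on a fresh guest: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the runner probes, loads the module, and then enables IPv4 forwarding. A sketch of that sequence, using local exec for brevity (the log drives the same commands through its SSH runner):

package main

import (
	"log"
	"os/exec"
)

func run(cmd string) error {
	return exec.Command("sudo", "sh", "-c", cmd).Run()
}

func main() {
	// Fails before br_netfilter is loaded: the same status 255 seen above.
	if err := run("sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
		log.Printf("probe failed (expected on a fresh guest): %v", err)
	}
	if err := run("modprobe br_netfilter"); err != nil {
		log.Fatal(err)
	}
	if err := run("echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		log.Fatal(err)
	}
	log.Println("bridge netfilter and IPv4 forwarding enabled")
}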
	I0408 23:55:47.716717    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:55:47.903826    7680 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 23:55:47.933797    7680 start.go:495] detecting cgroup driver to use...
	I0408 23:55:47.944920    7680 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0408 23:55:47.979092    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:55:48.012782    7680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 23:55:48.081536    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:55:48.118434    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 23:55:48.154411    7680 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0408 23:55:48.213702    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 23:55:48.238885    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:55:48.283932    7680 ssh_runner.go:195] Run: which cri-dockerd
	I0408 23:55:48.301286    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0408 23:55:48.317818    7680 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0408 23:55:48.361362    7680 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0408 23:55:48.557119    7680 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0408 23:55:48.733090    7680 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0408 23:55:48.733243    7680 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
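The 130-byte /etc/docker/daemon.json pushed "from memory" here pins docker's cgroup driver so it matches the kubelet. The exact payload is not shown in the log, so the fields below are an assumption inferred from the "configuring docker to use cgroupfs" line above:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

func main() {
	// Assumed daemon.json shape; only the cgroup-driver choice is attested
	// by the log line above.
	b, err := json.MarshalIndent(map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(b))
}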
	I0408 23:55:48.780249    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:55:48.975751    7680 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 23:55:51.658168    7680 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6823806s)
	I0408 23:55:51.670914    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0408 23:55:51.708703    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 23:55:51.746226    7680 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0408 23:55:51.949698    7680 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0408 23:55:52.162175    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:55:52.356729    7680 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0408 23:55:52.399883    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 23:55:52.431318    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:55:52.626035    7680 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0408 23:55:52.734689    7680 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0408 23:55:52.748922    7680 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0408 23:55:52.758320    7680 start.go:563] Will wait 60s for crictl version
	I0408 23:55:52.769576    7680 ssh_runner.go:195] Run: which crictl
	I0408 23:55:52.787650    7680 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 23:55:52.844297    7680 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0408 23:55:52.854084    7680 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 23:55:52.902685    7680 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 23:55:52.940791    7680 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0408 23:55:52.943350    7680 out.go:177]   - env NO_PROXY=192.168.119.206
	I0408 23:55:52.946414    7680 out.go:177]   - env NO_PROXY=192.168.119.206,192.168.118.215
	I0408 23:55:52.949183    7680 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0408 23:55:52.953487    7680 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0408 23:55:52.953487    7680 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0408 23:55:52.953487    7680 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0408 23:55:52.953487    7680 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f4:da:75 Flags:up|broadcast|multicast|running}
	I0408 23:55:52.956870    7680 ip.go:214] interface addr: fe80::e8ab:9cc6:22b1:a5fc/64
	I0408 23:55:52.956870    7680 ip.go:214] interface addr: 192.168.112.1/20
	I0408 23:55:52.968035    7680 ssh_runner.go:195] Run: grep 192.168.112.1	host.minikube.internal$ /etc/hosts
	I0408 23:55:52.974846    7680 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
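This /etc/hosts update is idempotent: grep -v strips any stale line tagged with the name, the fresh mapping is appended, and the temp file is copied back over /etc/hosts, so repeated runs converge on exactly one entry. A sketch of composing that command:

package main

import "fmt"

// hostsCmd reproduces the rewrite the log runs for host.minikube.internal:
// drop any stale line carrying the name, append the fresh mapping, then copy
// the temp file back over /etc/hosts.
func hostsCmd(ip, name string) string {
	return fmt.Sprintf(
		`{ grep -v $'\t%[2]s$' /etc/hosts; printf '%%s\t%[2]s\n' %[1]s; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts`,
		ip, name)
}

func main() {
	fmt.Println(hostsCmd("192.168.112.1", "host.minikube.internal"))
}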
	I0408 23:55:52.995720    7680 mustload.go:65] Loading cluster: ha-061400
	I0408 23:55:52.996445    7680 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:55:52.996668    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:55:55.144479    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:55.144479    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:55.144479    7680 host.go:66] Checking if "ha-061400" exists ...
	I0408 23:55:55.144479    7680 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400 for IP: 192.168.126.102
	I0408 23:55:55.145440    7680 certs.go:194] generating shared ca certs ...
	I0408 23:55:55.145440    7680 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:55:55.145440    7680 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0408 23:55:55.145440    7680 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0408 23:55:55.146538    7680 certs.go:256] generating profile certs ...
	I0408 23:55:55.147667    7680 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\client.key
	I0408 23:55:55.147921    7680 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.5d3ae75b
	I0408 23:55:55.148219    7680 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.5d3ae75b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.119.206 192.168.118.215 192.168.126.102 192.168.127.254]
	I0408 23:55:55.661647    7680 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.5d3ae75b ...
	I0408 23:55:55.661647    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.5d3ae75b: {Name:mka386ad3947e2e59ff49f1e94e7e8f217b7b995 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:55:55.663131    7680 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.5d3ae75b ...
	I0408 23:55:55.663131    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.5d3ae75b: {Name:mk03f50f4c4bde286901c1be8ad3f0de4616726e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:55:55.664822    7680 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.5d3ae75b -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt
	I0408 23:55:55.682546    7680 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.5d3ae75b -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key
	I0408 23:55:55.685339    7680 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key
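The apiserver cert is regenerated here because its SAN list must cover every address a client might dial: the in-cluster service IP 10.96.0.1, loopback, each control-plane node IP, and the kube-vip VIP 192.168.127.254; TLS verification fails on whichever path is missing from the list. A standard-library sketch of such a serving-cert template (self-signed for brevity; the real cert is signed by minikubeCA):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // every address a client might dial, per the log
			net.ParseIP("10.96.0.1"),       // kubernetes.default service IP
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.119.206"), // control-plane node IPs
			net.ParseIP("192.168.118.215"),
			net.ParseIP("192.168.126.102"),
			net.ParseIP("192.168.127.254"), // kube-vip HA VIP
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("cert: %d bytes DER", len(der))
}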
	I0408 23:55:55.685339    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 23:55:55.685661    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0408 23:55:55.685834    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 23:55:55.686131    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 23:55:55.686364    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 23:55:55.686620    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 23:55:55.687181    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 23:55:55.687455    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 23:55:55.688255    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem (1338 bytes)
	W0408 23:55:55.688660    7680 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864_empty.pem, impossibly tiny 0 bytes
	I0408 23:55:55.688860    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0408 23:55:55.689192    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0408 23:55:55.689547    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0408 23:55:55.690134    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0408 23:55:55.690777    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem (1708 bytes)
	I0408 23:55:55.690777    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /usr/share/ca-certificates/98642.pem
	I0408 23:55:55.691536    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:55:55.691536    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem -> /usr/share/ca-certificates/9864.pem
	I0408 23:55:55.691536    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:55:57.814606    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:57.815654    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:57.815654    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:56:00.379511    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:56:00.380451    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:56:00.380931    7680 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0408 23:56:00.487255    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0408 23:56:00.494969    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0408 23:56:00.529444    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0408 23:56:00.536758    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0408 23:56:00.575701    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0408 23:56:00.582344    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0408 23:56:00.612946    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0408 23:56:00.619591    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0408 23:56:00.651838    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0408 23:56:00.658771    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0408 23:56:00.692967    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0408 23:56:00.699870    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0408 23:56:00.721084    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 23:56:00.773604    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 23:56:00.819439    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 23:56:00.863566    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 23:56:00.906332    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0408 23:56:00.951917    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 23:56:01.005278    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 23:56:01.051041    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 23:56:01.098835    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /usr/share/ca-certificates/98642.pem (1708 bytes)
	I0408 23:56:01.157306    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 23:56:01.207222    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem --> /usr/share/ca-certificates/9864.pem (1338 bytes)
	I0408 23:56:01.256322    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0408 23:56:01.289971    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0408 23:56:01.319804    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0408 23:56:01.349189    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0408 23:56:01.378385    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0408 23:56:01.410434    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0408 23:56:01.439719    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0408 23:56:01.483009    7680 ssh_runner.go:195] Run: openssl version
	I0408 23:56:01.502441    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9864.pem && ln -fs /usr/share/ca-certificates/9864.pem /etc/ssl/certs/9864.pem"
	I0408 23:56:01.534823    7680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9864.pem
	I0408 23:56:01.541068    7680 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 23:04 /usr/share/ca-certificates/9864.pem
	I0408 23:56:01.553388    7680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9864.pem
	I0408 23:56:01.572304    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9864.pem /etc/ssl/certs/51391683.0"
	I0408 23:56:01.601306    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98642.pem && ln -fs /usr/share/ca-certificates/98642.pem /etc/ssl/certs/98642.pem"
	I0408 23:56:01.630916    7680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98642.pem
	I0408 23:56:01.637024    7680 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 23:04 /usr/share/ca-certificates/98642.pem
	I0408 23:56:01.648525    7680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98642.pem
	I0408 23:56:01.668411    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98642.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 23:56:01.700417    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 23:56:01.732921    7680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:56:01.739927    7680 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:56:01.751129    7680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:56:01.770935    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
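The hash-and-symlink dance above reproduces what c_rehash does: OpenSSL locates trust anchors by subject-hash filename, so each PEM under /usr/share/ca-certificates gets an /etc/ssl/certs/<hash>.0 symlink (b5213941.0 is minikubeCA's hash in this run). A sketch:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// rehashCert links a CA PEM at the subject-hash name OpenSSL expects,
// shelling out for the hash exactly as the log does.
func rehashCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := rehashCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}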
	I0408 23:56:01.801979    7680 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 23:56:01.808914    7680 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 23:56:01.809301    7680 kubeadm.go:934] updating node {m03 192.168.126.102 8443 v1.32.2 docker true true} ...
	I0408 23:56:01.809540    7680 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-061400-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.126.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP:192.168.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 23:56:01.809540    7680 kube-vip.go:115] generating kube-vip config ...
	I0408 23:56:01.821082    7680 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0408 23:56:01.847795    7680 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0408 23:56:01.847795    7680 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.127.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
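For orientation: kube-vip runs as a static pod on each control-plane node, holds the plndr-cp-lock leader lease, answers ARP for the VIP 192.168.127.254 on eth0, and with lb_enable also load-balances port 8443 across the apiservers. A sketch that sanity-checks a rendered manifest by decoding it into a typed Pod; it assumes k8s.io/api and sigs.k8s.io/yaml on the module path, and the file path is hypothetical:

package main

import (
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	data, err := os.ReadFile("kube-vip.yaml") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		log.Fatal(err)
	}
	log.Printf("pod %s: %d container(s), hostNetwork=%v",
		pod.Name, len(pod.Spec.Containers), pod.Spec.HostNetwork)
}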
	I0408 23:56:01.861988    7680 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0408 23:56:01.878319    7680 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0408 23:56:01.889582    7680 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0408 23:56:01.909280    7680 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0408 23:56:01.909280    7680 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256
	I0408 23:56:01.909280    7680 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256
	I0408 23:56:01.909280    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0408 23:56:01.909280    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm -> /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0408 23:56:01.924390    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 23:56:01.925019    7680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0408 23:56:01.925019    7680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0408 23:56:01.948111    7680 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0408 23:56:01.948111    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0408 23:56:01.949027    7680 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0408 23:56:01.949237    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0408 23:56:01.949495    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
	I0408 23:56:01.964638    7680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0408 23:56:02.030584    7680 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0408 23:56:02.030862    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
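The binaries are fetched once on the host with detached checksums (the `?checksum=file:...sha256` query in the dl.k8s.io URLs above) and cached; the stat probes fail with status 1 on this fresh guest, which is what triggers the scp of kubectl, kubeadm, and kubelet. A standard-library sketch of the checksum verification, with hypothetical inputs:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verifyFile checks a downloaded binary against the hex digest from its
// detached .sha256 file.
func verifyFile(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != strings.TrimSpace(wantHex) {
		return fmt.Errorf("%s: checksum mismatch: got %s, want %s", path, got, wantHex)
	}
	return nil
}

func main() {
	// Hypothetical inputs: the cached binary and the contents of its .sha256.
	if err := verifyFile("kubelet", "0123..."); err != nil {
		fmt.Println(err)
	}
}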
	I0408 23:56:03.257136    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0408 23:56:03.277569    7680 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0408 23:56:03.318624    7680 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 23:56:03.351237    7680 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1443 bytes)
	I0408 23:56:03.393478    7680 ssh_runner.go:195] Run: grep 192.168.127.254	control-plane.minikube.internal$ /etc/hosts
	I0408 23:56:03.400235    7680 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.127.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 23:56:03.436854    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:56:03.659865    7680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 23:56:03.695629    7680 host.go:66] Checking if "ha-061400" exists ...
	I0408 23:56:03.696618    7680 start.go:317] joinCluster: &{Name:ha-061400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP:192.168.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.119.206 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.118.215 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.126.102 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:56:03.696873    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0408 23:56:03.696943    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:56:05.853126    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:56:05.853126    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:56:05.854146    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:56:08.453032    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:56:08.453032    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:56:08.454153    7680 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0408 23:56:08.667548    7680 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9706091s)
	I0408 23:56:08.667690    7680 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.126.102 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 23:56:08.667690    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wlyb9l.uobras4z9tmnx4in --discovery-token-ca-cert-hash sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-061400-m03 --control-plane --apiserver-advertise-address=192.168.126.102 --apiserver-bind-port=8443"
	I0408 23:56:52.219978    7680 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wlyb9l.uobras4z9tmnx4in --discovery-token-ca-cert-hash sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-061400-m03 --control-plane --apiserver-advertise-address=192.168.126.102 --apiserver-bind-port=8443": (43.5517131s)
	I0408 23:56:52.219978    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0408 23:56:53.086433    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-061400-m03 minikube.k8s.io/updated_at=2025_04_08T23_56_53_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83 minikube.k8s.io/name=ha-061400 minikube.k8s.io/primary=false
	I0408 23:56:53.263171    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-061400-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0408 23:56:53.448007    7680 start.go:319] duration metric: took 49.7507324s to joinCluster
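The join flow above: `kubeadm token create --print-join-command` runs on the primary to mint a token plus CA-cert hash, and m03 then joins as an additional control plane, advertising its own IP. Every flag below appears in the log; the helper itself is a sketch, not minikube's code:

package main

import "fmt"

// joinCmd composes the control-plane join the log runs on m03. The token and
// CA hash come from the primary's `kubeadm token create --print-join-command`.
func joinCmd(endpoint, token, caHash, nodeName, advertiseIP string) string {
	return fmt.Sprintf(
		"kubeadm join %s --token %s --discovery-token-ca-cert-hash %s"+
			" --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock"+
			" --node-name=%s --control-plane --apiserver-advertise-address=%s --apiserver-bind-port=8443",
		endpoint, token, caHash, nodeName, advertiseIP)
}

func main() {
	fmt.Println(joinCmd("control-plane.minikube.internal:8443",
		"wlyb9l.uobras4z9tmnx4in",
		"sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334",
		"ha-061400-m03", "192.168.126.102"))
}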
	I0408 23:56:53.448007    7680 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.126.102 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 23:56:53.448998    7680 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:56:53.456984    7680 out.go:177] * Verifying Kubernetes components...
	I0408 23:56:53.476991    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:56:53.883922    7680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 23:56:53.917566    7680 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0408 23:56:53.918376    7680 kapi.go:59] client config for ha-061400: &rest.Config{Host:"https://192.168.127.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-061400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-061400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2809400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0408 23:56:53.918536    7680 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.127.254:8443 with https://192.168.119.206:8443
	I0408 23:56:53.919410    7680 node_ready.go:35] waiting up to 6m0s for node "ha-061400-m03" to be "Ready" ...
	I0408 23:56:53.919410    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:53.919410    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:53.919410    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:53.919410    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:53.937245    7680 round_trippers.go:581] Response Status: 200 OK in 17 milliseconds
	I0408 23:56:54.419596    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:54.419596    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:54.419596    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:54.419596    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:54.425201    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:56:54.920110    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:54.920110    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:54.920110    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:54.920110    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:54.927198    7680 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0408 23:56:55.419986    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:55.419986    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:55.419986    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:55.419986    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:55.425244    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:56:55.920739    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:55.920739    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:55.920739    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:55.920739    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:55.937063    7680 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0408 23:56:55.938072    7680 node_ready.go:53] node "ha-061400-m03" has status "Ready":"False"
	I0408 23:56:56.421011    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:56.421011    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:56.421011    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:56.421011    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:56.425602    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:56:56.921264    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:56.921264    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:56.921264    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:56.921264    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:56.927331    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:56:57.420328    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:57.420379    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:57.420417    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:57.420417    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:57.425902    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:56:57.920502    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:57.920562    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:57.920643    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:57.920643    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:57.930022    7680 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0408 23:56:58.419817    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:58.420340    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:58.420413    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:58.420413    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:58.426121    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:56:58.426121    7680 node_ready.go:53] node "ha-061400-m03" has status "Ready":"False"
	I0408 23:56:58.920862    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:58.920936    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:58.920936    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:58.920936    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:58.926341    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:56:59.420438    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:59.420438    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:59.420438    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:59.420438    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:59.427084    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:56:59.920571    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:59.920571    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:59.920571    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:59.920571    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:59.926702    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:00.420097    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:00.420097    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:00.420097    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:00.420097    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:00.572568    7680 round_trippers.go:581] Response Status: 200 OK in 152 milliseconds
	I0408 23:57:00.573125    7680 node_ready.go:53] node "ha-061400-m03" has status "Ready":"False"
	I0408 23:57:00.920754    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:00.920754    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:00.920754    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:00.920754    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:00.926672    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:01.420249    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:01.420386    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:01.420386    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:01.420386    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:01.425783    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:01.920600    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:01.920731    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:01.920731    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:01.920731    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:01.926710    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:02.420502    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:02.420502    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:02.420615    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:02.420615    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:02.427303    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:02.920160    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:02.920160    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:02.920160    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:02.920160    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:02.926713    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:02.927250    7680 node_ready.go:53] node "ha-061400-m03" has status "Ready":"False"
	I0408 23:57:03.420542    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:03.420542    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:03.420542    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:03.420542    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:03.434502    7680 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0408 23:57:03.920777    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:03.920882    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:03.920882    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:03.920882    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:03.925445    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:57:04.420677    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:04.420677    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:04.420677    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:04.420677    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:04.425711    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:04.920241    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:04.920295    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:04.920295    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:04.920295    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:04.927645    7680 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0408 23:57:04.928186    7680 node_ready.go:53] node "ha-061400-m03" has status "Ready":"False"
	I0408 23:57:05.420047    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:05.420047    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:05.420047    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:05.420047    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:05.425500    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:05.920775    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:05.920775    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:05.920775    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:05.920775    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:05.925877    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:06.420598    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:06.420598    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:06.420598    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:06.420707    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:06.431386    7680 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0408 23:57:06.920061    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:06.920547    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:06.920547    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:06.920547    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:06.929925    7680 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0408 23:57:06.930323    7680 node_ready.go:53] node "ha-061400-m03" has status "Ready":"False"
	I0408 23:57:07.419917    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:07.419917    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:07.419917    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:07.419917    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:07.425727    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:07.921141    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:07.921141    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:07.921141    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:07.921141    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:07.925877    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:57:08.419722    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:08.419722    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:08.419722    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:08.419722    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:08.424248    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:57:08.919832    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:08.919832    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:08.919832    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:08.919832    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:08.925015    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:09.419913    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:09.419913    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:09.419913    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:09.419913    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:09.425452    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:09.425452    7680 node_ready.go:53] node "ha-061400-m03" has status "Ready":"False"
	I0408 23:57:09.920170    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:09.920239    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:09.920239    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:09.920239    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:09.926353    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:10.420567    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:10.420567    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:10.420567    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:10.420567    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:10.425909    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:10.919865    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:10.919865    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:10.919865    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:10.919865    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:10.926214    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:11.420519    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:11.420519    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:11.420519    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:11.420519    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:11.425254    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:57:11.425789    7680 node_ready.go:53] node "ha-061400-m03" has status "Ready":"False"
	I0408 23:57:11.920085    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:11.920085    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:11.920085    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:11.920085    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:11.927917    7680 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0408 23:57:12.421250    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:12.421327    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.421327    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.421327    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.426507    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:12.920584    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:12.920584    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.920584    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.920584    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.926532    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:12.927469    7680 node_ready.go:49] node "ha-061400-m03" has status "Ready":"True"
	I0408 23:57:12.927607    7680 node_ready.go:38] duration metric: took 19.0079464s for node "ha-061400-m03" to be "Ready" ...
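[Editor's note] The node_ready loop above polls GET /api/v1/nodes/ha-061400-m03 roughly every 500ms until the Ready condition flips to True (19s in this run). A minimal client-go sketch of that pattern, not minikube's actual implementation; the kubeconfig path, node name, interval, and 6m budget are taken from the log above:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms, give up after 6 minutes -- the cadence and budget
	// the log shows for node "ha-061400-m03".
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-061400-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient; keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready:", err == nil)
}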
	I0408 23:57:12.927607    7680 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 23:57:12.927737    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods
	I0408 23:57:12.927737    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.927809    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.927809    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.932914    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:12.937232    7680 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-rzk8c" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:12.937343    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-rzk8c
	I0408 23:57:12.937399    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.937399    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.937482    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.941345    7680 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0408 23:57:12.942405    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:12.942555    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.942555    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.942555    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.955780    7680 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0408 23:57:12.956813    7680 pod_ready.go:93] pod "coredns-668d6bf9bc-rzk8c" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:12.957193    7680 pod_ready.go:82] duration metric: took 19.9613ms for pod "coredns-668d6bf9bc-rzk8c" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:12.957276    7680 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-scvcr" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:12.957459    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-scvcr
	I0408 23:57:12.957459    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.957511    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.957511    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.961384    7680 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0408 23:57:12.961807    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:12.961807    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.961887    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.961887    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.965407    7680 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0408 23:57:12.965865    7680 pod_ready.go:93] pod "coredns-668d6bf9bc-scvcr" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:12.965865    7680 pod_ready.go:82] duration metric: took 8.5892ms for pod "coredns-668d6bf9bc-scvcr" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:12.965921    7680 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:12.966110    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-061400
	I0408 23:57:12.966170    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.966170    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.966225    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.970445    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:57:12.971524    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:12.971594    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.971594    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.971634    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.976502    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:57:12.976502    7680 pod_ready.go:93] pod "etcd-ha-061400" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:12.976502    7680 pod_ready.go:82] duration metric: took 10.5815ms for pod "etcd-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:12.976502    7680 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:12.976502    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-061400-m02
	I0408 23:57:12.977744    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.977792    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.977792    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.981476    7680 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0408 23:57:12.982410    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:57:12.982410    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.982410    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.982479    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.991574    7680 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0408 23:57:12.992269    7680 pod_ready.go:93] pod "etcd-ha-061400-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:12.992316    7680 pod_ready.go:82] duration metric: took 15.8137ms for pod "etcd-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:12.992316    7680 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-061400-m03" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:13.120747    7680 request.go:661] Waited for 128.3664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-061400-m03
	I0408 23:57:13.120747    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-061400-m03
	I0408 23:57:13.120747    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:13.120747    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:13.120747    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:13.126457    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:13.320888    7680 request.go:661] Waited for 193.668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:13.320888    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:13.320888    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:13.320888    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:13.320888    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:13.326530    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:13.326631    7680 pod_ready.go:93] pod "etcd-ha-061400-m03" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:13.326631    7680 pod_ready.go:82] duration metric: took 334.31ms for pod "etcd-ha-061400-m03" in "kube-system" namespace to be "Ready" ...
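[Editor's note] The "Waited ... due to client-side throttling, not priority and fairness" lines that start appearing here come from client-go's token-bucket rate limiter, not from the apiserver. The rest.Config dumped earlier in this log shows QPS:0, Burst:0, which means client-go falls back to its defaults (5 QPS, burst 10); the alternating pod/node GETs exceed that and get queued. A sketch of raising the client-side budget, under the assumption of a default kubeconfig path:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// QPS is the sustained request rate before throttling kicks in;
	// Burst is the short-term allowance. Values here are illustrative.
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}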
	I0408 23:57:13.327189    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:13.520791    7680 request.go:661] Waited for 193.4978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-061400
	I0408 23:57:13.520791    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-061400
	I0408 23:57:13.520791    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:13.520791    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:13.520791    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:13.526782    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:13.720583    7680 request.go:661] Waited for 192.7717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:13.721076    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:13.721076    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:13.721076    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:13.721076    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:13.727209    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:13.727209    7680 pod_ready.go:93] pod "kube-apiserver-ha-061400" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:13.727209    7680 pod_ready.go:82] duration metric: took 400.0147ms for pod "kube-apiserver-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:13.727209    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:13.920992    7680 request.go:661] Waited for 193.7812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-061400-m02
	I0408 23:57:13.920992    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-061400-m02
	I0408 23:57:13.920992    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:13.920992    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:13.920992    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:13.926693    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:14.121239    7680 request.go:661] Waited for 193.8932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:57:14.121239    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:57:14.121239    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:14.121239    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:14.121239    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:14.127562    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:14.128204    7680 pod_ready.go:93] pod "kube-apiserver-ha-061400-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:14.128204    7680 pod_ready.go:82] duration metric: took 400.9899ms for pod "kube-apiserver-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:14.128204    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-061400-m03" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:14.320362    7680 request.go:661] Waited for 191.947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-061400-m03
	I0408 23:57:14.320362    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-061400-m03
	I0408 23:57:14.320967    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:14.320967    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:14.320967    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:14.326866    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:14.521010    7680 request.go:661] Waited for 193.5706ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:14.521010    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:14.521010    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:14.521010    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:14.521010    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:14.526489    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:14.526555    7680 pod_ready.go:93] pod "kube-apiserver-ha-061400-m03" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:14.526555    7680 pod_ready.go:82] duration metric: took 398.346ms for pod "kube-apiserver-ha-061400-m03" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:14.526555    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:14.720922    7680 request.go:661] Waited for 194.3639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-061400
	I0408 23:57:14.720922    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-061400
	I0408 23:57:14.720922    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:14.720922    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:14.720922    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:14.727055    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:14.920322    7680 request.go:661] Waited for 193.2637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:14.920613    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:14.920613    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:14.920613    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:14.920613    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:14.926054    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:14.926303    7680 pod_ready.go:93] pod "kube-controller-manager-ha-061400" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:14.926303    7680 pod_ready.go:82] duration metric: took 399.743ms for pod "kube-controller-manager-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:14.926303    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:15.120682    7680 request.go:661] Waited for 194.3764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-061400-m02
	I0408 23:57:15.121180    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-061400-m02
	I0408 23:57:15.121180    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:15.121180    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:15.121180    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:15.131104    7680 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0408 23:57:15.321096    7680 request.go:661] Waited for 189.4134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:57:15.321096    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:57:15.321565    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:15.321565    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:15.321565    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:15.327292    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:57:15.327580    7680 pod_ready.go:93] pod "kube-controller-manager-ha-061400-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:15.327660    7680 pod_ready.go:82] duration metric: took 401.3517ms for pod "kube-controller-manager-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:15.327660    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-061400-m03" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:15.521156    7680 request.go:661] Waited for 193.3592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-061400-m03
	I0408 23:57:15.521596    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-061400-m03
	I0408 23:57:15.521596    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:15.521596    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:15.521596    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:15.526703    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:15.721299    7680 request.go:661] Waited for 194.0271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:15.721299    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:15.721299    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:15.721299    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:15.721299    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:15.727229    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:15.727305    7680 pod_ready.go:93] pod "kube-controller-manager-ha-061400-m03" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:15.727858    7680 pod_ready.go:82] duration metric: took 399.6396ms for pod "kube-controller-manager-ha-061400-m03" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:15.727858    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lr9jb" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:15.920869    7680 request.go:661] Waited for 193.0086ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lr9jb
	I0408 23:57:15.920869    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lr9jb
	I0408 23:57:15.920869    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:15.920869    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:15.920869    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:15.925982    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:16.120766    7680 request.go:661] Waited for 193.5359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:16.121267    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:16.121297    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:16.121297    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:16.121297    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:16.129550    7680 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0408 23:57:16.129550    7680 pod_ready.go:93] pod "kube-proxy-lr9jb" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:16.129550    7680 pod_ready.go:82] duration metric: took 401.687ms for pod "kube-proxy-lr9jb" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:16.129550    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nkwqr" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:16.320926    7680 request.go:661] Waited for 191.3731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nkwqr
	I0408 23:57:16.320926    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nkwqr
	I0408 23:57:16.320926    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:16.320926    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:16.320926    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:16.336757    7680 round_trippers.go:581] Response Status: 200 OK in 15 milliseconds
	I0408 23:57:16.520832    7680 request.go:661] Waited for 183.4389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:57:16.520832    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:57:16.520832    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:16.520832    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:16.520832    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:16.526311    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:16.526951    7680 pod_ready.go:93] pod "kube-proxy-nkwqr" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:16.526951    7680 pod_ready.go:82] duration metric: took 397.3952ms for pod "kube-proxy-nkwqr" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:16.527069    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rl7bv" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:16.720489    7680 request.go:661] Waited for 193.4175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rl7bv
	I0408 23:57:16.720943    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rl7bv
	I0408 23:57:16.720943    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:16.720943    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:16.720943    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:16.726569    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:16.920477    7680 request.go:661] Waited for 193.3957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:16.920477    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:16.920477    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:16.920477    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:16.920477    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:16.925713    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:16.926947    7680 pod_ready.go:93] pod "kube-proxy-rl7bv" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:16.926947    7680 pod_ready.go:82] duration metric: took 399.8726ms for pod "kube-proxy-rl7bv" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:16.926947    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:17.120839    7680 request.go:661] Waited for 193.8895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-061400
	I0408 23:57:17.121442    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-061400
	I0408 23:57:17.121535    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:17.121535    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:17.121535    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:17.128306    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:17.320560    7680 request.go:661] Waited for 191.7976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:17.320560    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:17.320560    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:17.320560    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:17.320560    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:17.326905    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:17.326905    7680 pod_ready.go:93] pod "kube-scheduler-ha-061400" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:17.327464    7680 pod_ready.go:82] duration metric: took 400.5121ms for pod "kube-scheduler-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:17.327565    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:17.520747    7680 request.go:661] Waited for 193.1797ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-061400-m02
	I0408 23:57:17.521181    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-061400-m02
	I0408 23:57:17.521270    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:17.521270    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:17.521270    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:17.530002    7680 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0408 23:57:17.720284    7680 request.go:661] Waited for 190.2794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:57:17.720284    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:57:17.720284    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:17.720284    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:17.720284    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:17.725244    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:57:17.725244    7680 pod_ready.go:93] pod "kube-scheduler-ha-061400-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:17.725244    7680 pod_ready.go:82] duration metric: took 397.6735ms for pod "kube-scheduler-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:17.725244    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-061400-m03" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:17.920794    7680 request.go:661] Waited for 194.7756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-061400-m03
	I0408 23:57:17.920794    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-061400-m03
	I0408 23:57:17.920794    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:17.921479    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:17.921479    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:17.927343    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:18.120648    7680 request.go:661] Waited for 192.7748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:18.120648    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:18.120990    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:18.120990    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:18.120990    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:18.138457    7680 round_trippers.go:581] Response Status: 200 OK in 17 milliseconds
	I0408 23:57:18.138997    7680 pod_ready.go:93] pod "kube-scheduler-ha-061400-m03" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:18.138997    7680 pod_ready.go:82] duration metric: took 413.7477ms for pod "kube-scheduler-ha-061400-m03" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:18.138997    7680 pod_ready.go:39] duration metric: took 5.2113205s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
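[Editor's note] Each pod_ready wait above boils down to listing kube-system pods by label and reading the PodReady condition from each pod's status. A hypothetical client-go sketch of that check (not minikube's code; the label selectors are the ones quoted in the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One selector per system-critical component, as listed in the log.
	for _, sel := range []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy",
		"component=kube-scheduler",
	} {
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%-45s ready=%v\n", p.Name, isPodReady(&p))
		}
	}
}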
	I0408 23:57:18.138997    7680 api_server.go:52] waiting for apiserver process to appear ...
	I0408 23:57:18.151004    7680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 23:57:18.182192    7680 api_server.go:72] duration metric: took 24.7338589s to wait for apiserver process to appear ...
	I0408 23:57:18.182192    7680 api_server.go:88] waiting for apiserver healthz status ...
	I0408 23:57:18.183766    7680 api_server.go:253] Checking apiserver healthz at https://192.168.119.206:8443/healthz ...
	I0408 23:57:18.193479    7680 api_server.go:279] https://192.168.119.206:8443/healthz returned 200:
	ok
	I0408 23:57:18.193479    7680 round_trippers.go:470] GET https://192.168.119.206:8443/version
	I0408 23:57:18.193479    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:18.193479    7680 round_trippers.go:480]     Accept: application/json, */*
	I0408 23:57:18.193479    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:18.199841    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:18.199841    7680 api_server.go:141] control plane version: v1.32.2
	I0408 23:57:18.199841    7680 api_server.go:131] duration metric: took 16.0745ms to wait for apiserver health ...
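[Editor's note] The healthz/version probe recorded just above is two plain GETs: /healthz answers with the literal body "ok", and /version yields the control-plane version string ("v1.32.2" here). A small sketch of the same probe via client-go's discovery REST client, assuming the default kubeconfig path:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// GET /healthz -- a healthy apiserver returns the body "ok".
	body, err := client.Discovery().RESTClient().
		Get().AbsPath("/healthz").DoRaw(context.Background())
	fmt.Printf("healthz: %s (err=%v)\n", body, err)

	// GET /version -- the same endpoint api_server.go reads the
	// control-plane version from.
	if info, err := client.Discovery().ServerVersion(); err == nil {
		fmt.Println("control plane version:", info.GitVersion)
	}
}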
	I0408 23:57:18.199841    7680 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 23:57:18.321193    7680 request.go:661] Waited for 120.6978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods
	I0408 23:57:18.321519    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods
	I0408 23:57:18.321519    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:18.321519    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:18.321519    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:18.330127    7680 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0408 23:57:18.333349    7680 system_pods.go:59] 24 kube-system pods found
	I0408 23:57:18.333349    7680 system_pods.go:61] "coredns-668d6bf9bc-rzk8c" [18f6703f-34ad-403f-b86d-9a8f3dc927a0] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "coredns-668d6bf9bc-scvcr" [952efdd7-d201-4747-833a-59e05925e74f] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "etcd-ha-061400" [429dfaa4-c9bf-47dc-81f9-ab33ad3acee4] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "etcd-ha-061400-m02" [5fa6b2de-e3e8-4c95-84e3-3e344ce6a56f] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "etcd-ha-061400-m03" [9cfea750-78b9-4595-8046-cca9379d4651] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kindnet-44mc6" [a8a857e1-90f1-4346-97a7-0b083352aeda] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kindnet-7mvqz" [3fcc4494-1878-48e2-97ee-f76dcff55c29] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kindnet-d8bcw" [020b9216-ff50-4ac1-9c3e-d6b836c42ecf] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-apiserver-ha-061400" [488f7097-53fd-4754-aa77-78aed24b3494] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-apiserver-ha-061400-m02" [1f83551d-39c0-4485-b4a6-d44c3e58b435] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-apiserver-ha-061400-m03" [69794402-115c-4ba3-a9e9-35d1f59b5a46] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-controller-manager-ha-061400" [28c1163e-e283-49b0-bab7-b91d1b73ab27] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-controller-manager-ha-061400-m02" [89ab7c55-91a9-452b-9c0e-3673bf608abc] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-controller-manager-ha-061400-m03" [d4a82363-d392-4806-bdec-5e370db14a21] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-proxy-lr9jb" [4ea29fd2-fb54-44d7-a558-a272fd4f05f5] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-proxy-nkwqr" [20f509f0-ca9e-4464-b87f-e5d226ce9e3c] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-proxy-rl7bv" [d928bc40-4dcd-47d2-9c7a-b41237c0b070] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-scheduler-ha-061400" [b16bc563-a6aa-49d3-b7c4-74b5827bb66e] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-scheduler-ha-061400-m02" [e9a386c4-fe99-49d0-bff9-d434ba81d735] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-scheduler-ha-061400-m03" [db70e2f5-39bf-42ee-826f-6643dc5fc79a] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-vip-ha-061400" [b677e4c1-39bf-459c-a33c-ecfce817e2a5] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-vip-ha-061400-m02" [2a30dc1d-3208-468f-8614-a469337f5ac2] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-vip-ha-061400-m03" [e3b9b5ad-7566-45c6-9a8f-2be704a0b6c0] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "storage-provisioner" [bd11797d-cec8-419e-b7e9-1d537d9a7378] Running
	I0408 23:57:18.333349    7680 system_pods.go:74] duration metric: took 133.5058ms to wait for pod list to return data ...
	I0408 23:57:18.333349    7680 default_sa.go:34] waiting for default service account to be created ...
	I0408 23:57:18.521896    7680 request.go:661] Waited for 188.5453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/default/serviceaccounts
	I0408 23:57:18.522284    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/default/serviceaccounts
	I0408 23:57:18.522284    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:18.522284    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:18.522284    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:18.527858    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:18.528198    7680 default_sa.go:45] found service account: "default"
	I0408 23:57:18.528313    7680 default_sa.go:55] duration metric: took 194.9048ms for default service account to be created ...
	I0408 23:57:18.528369    7680 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 23:57:18.720379    7680 request.go:661] Waited for 191.958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods
	I0408 23:57:18.720379    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods
	I0408 23:57:18.720379    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:18.720379    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:18.720379    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:18.725422    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:18.729479    7680 system_pods.go:86] 24 kube-system pods found
	I0408 23:57:18.729545    7680 system_pods.go:89] "coredns-668d6bf9bc-rzk8c" [18f6703f-34ad-403f-b86d-9a8f3dc927a0] Running
	I0408 23:57:18.729545    7680 system_pods.go:89] "coredns-668d6bf9bc-scvcr" [952efdd7-d201-4747-833a-59e05925e74f] Running
	I0408 23:57:18.729545    7680 system_pods.go:89] "etcd-ha-061400" [429dfaa4-c9bf-47dc-81f9-ab33ad3acee4] Running
	I0408 23:57:18.729545    7680 system_pods.go:89] "etcd-ha-061400-m02" [5fa6b2de-e3e8-4c95-84e3-3e344ce6a56f] Running
	I0408 23:57:18.729545    7680 system_pods.go:89] "etcd-ha-061400-m03" [9cfea750-78b9-4595-8046-cca9379d4651] Running
	I0408 23:57:18.729545    7680 system_pods.go:89] "kindnet-44mc6" [a8a857e1-90f1-4346-97a7-0b083352aeda] Running
	I0408 23:57:18.729545    7680 system_pods.go:89] "kindnet-7mvqz" [3fcc4494-1878-48e2-97ee-f76dcff55c29] Running
	I0408 23:57:18.729545    7680 system_pods.go:89] "kindnet-d8bcw" [020b9216-ff50-4ac1-9c3e-d6b836c42ecf] Running
	I0408 23:57:18.729545    7680 system_pods.go:89] "kube-apiserver-ha-061400" [488f7097-53fd-4754-aa77-78aed24b3494] Running
	I0408 23:57:18.729630    7680 system_pods.go:89] "kube-apiserver-ha-061400-m02" [1f83551d-39c0-4485-b4a6-d44c3e58b435] Running
	I0408 23:57:18.729630    7680 system_pods.go:89] "kube-apiserver-ha-061400-m03" [69794402-115c-4ba3-a9e9-35d1f59b5a46] Running
	I0408 23:57:18.729630    7680 system_pods.go:89] "kube-controller-manager-ha-061400" [28c1163e-e283-49b0-bab7-b91d1b73ab27] Running
	I0408 23:57:18.729630    7680 system_pods.go:89] "kube-controller-manager-ha-061400-m02" [89ab7c55-91a9-452b-9c0e-3673bf608abc] Running
	I0408 23:57:18.729630    7680 system_pods.go:89] "kube-controller-manager-ha-061400-m03" [d4a82363-d392-4806-bdec-5e370db14a21] Running
	I0408 23:57:18.729630    7680 system_pods.go:89] "kube-proxy-lr9jb" [4ea29fd2-fb54-44d7-a558-a272fd4f05f5] Running
	I0408 23:57:18.729711    7680 system_pods.go:89] "kube-proxy-nkwqr" [20f509f0-ca9e-4464-b87f-e5d226ce9e3c] Running
	I0408 23:57:18.729711    7680 system_pods.go:89] "kube-proxy-rl7bv" [d928bc40-4dcd-47d2-9c7a-b41237c0b070] Running
	I0408 23:57:18.729711    7680 system_pods.go:89] "kube-scheduler-ha-061400" [b16bc563-a6aa-49d3-b7c4-74b5827bb66e] Running
	I0408 23:57:18.729711    7680 system_pods.go:89] "kube-scheduler-ha-061400-m02" [e9a386c4-fe99-49d0-bff9-d434ba81d735] Running
	I0408 23:57:18.729711    7680 system_pods.go:89] "kube-scheduler-ha-061400-m03" [db70e2f5-39bf-42ee-826f-6643dc5fc79a] Running
	I0408 23:57:18.729711    7680 system_pods.go:89] "kube-vip-ha-061400" [b677e4c1-39bf-459c-a33c-ecfce817e2a5] Running
	I0408 23:57:18.729711    7680 system_pods.go:89] "kube-vip-ha-061400-m02" [2a30dc1d-3208-468f-8614-a469337f5ac2] Running
	I0408 23:57:18.729711    7680 system_pods.go:89] "kube-vip-ha-061400-m03" [e3b9b5ad-7566-45c6-9a8f-2be704a0b6c0] Running
	I0408 23:57:18.729780    7680 system_pods.go:89] "storage-provisioner" [bd11797d-cec8-419e-b7e9-1d537d9a7378] Running
	I0408 23:57:18.729780    7680 system_pods.go:126] duration metric: took 201.3859ms to wait for k8s-apps to be running ...
	I0408 23:57:18.729780    7680 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 23:57:18.741726    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 23:57:18.769079    7680 system_svc.go:56] duration metric: took 39.2989ms WaitForService to wait for kubelet
	I0408 23:57:18.769079    7680 kubeadm.go:582] duration metric: took 25.320738s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 23:57:18.769164    7680 node_conditions.go:102] verifying NodePressure condition ...
	I0408 23:57:18.920700    7680 request.go:661] Waited for 151.3877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes
	I0408 23:57:18.921257    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes
	I0408 23:57:18.921257    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:18.921257    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:18.921257    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:18.927755    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:18.928537    7680 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 23:57:18.928610    7680 node_conditions.go:123] node cpu capacity is 2
	I0408 23:57:18.928610    7680 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 23:57:18.928610    7680 node_conditions.go:123] node cpu capacity is 2
	I0408 23:57:18.928610    7680 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 23:57:18.928610    7680 node_conditions.go:123] node cpu capacity is 2
	I0408 23:57:18.928610    7680 node_conditions.go:105] duration metric: took 159.4445ms to run NodePressure ...
	I0408 23:57:18.928670    7680 start.go:241] waiting for startup goroutines ...
	I0408 23:57:18.928748    7680 start.go:255] writing updated cluster config ...
	I0408 23:57:18.940095    7680 ssh_runner.go:195] Run: rm -f paused
	I0408 23:57:19.092799    7680 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0408 23:57:19.100210    7680 out.go:177] * Done! kubectl is now configured to use "ha-061400" cluster and "default" namespace by default
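
	The trace above shows minikube's readiness flow: wait for the system pods, poll the apiserver's /healthz endpoint until it returns 200 "ok", then confirm the control-plane version via /version. The "Waited for ... due to client-side throttling" lines come from client-go's default client-side rate limiter, as the message itself notes, not from server-side priority and fairness. Below is a minimal Go sketch of that polling loop; the helper name, timeouts, and the InsecureSkipVerify shortcut are illustrative assumptions, not minikube's actual implementation.

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    // waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
	    // Illustrative only; minikube's real logic lives in api_server.go.
	    func waitForHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            // A real client should trust the cluster CA instead of skipping
	            // verification; this just keeps the sketch self-contained.
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	            Timeout:   5 * time.Second,
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                ok := resp.StatusCode == http.StatusOK
	                resp.Body.Close()
	                if ok {
	                    return nil
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	    }

	    func main() {
	        if err := waitForHealthz("https://192.168.119.206:8443/healthz", 2*time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }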
	
	
	==> Docker <==
	Apr 08 23:49:40 ha-061400 cri-dockerd[1341]: time="2025-04-08T23:49:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/053f18a3f15a430b334c18647767c96e5c9aefa0d49ff7988c41dd94ebb1ef84/resolv.conf as [nameserver 192.168.112.1]"
	Apr 08 23:49:40 ha-061400 cri-dockerd[1341]: time="2025-04-08T23:49:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b970cca1abdffe883ab712bc2a9ff00c9e99300ea86bd493b95b8002eb151801/resolv.conf as [nameserver 192.168.112.1]"
	Apr 08 23:49:40 ha-061400 cri-dockerd[1341]: time="2025-04-08T23:49:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1044fd2112454762d545724c0d174d35c038a14ed69086f711c54fa6c5f2007c/resolv.conf as [nameserver 192.168.112.1]"
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.388606126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.388688926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.388706026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.388956927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.664557835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.664810536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.665069637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.665665939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.742539220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.742700221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.742912522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.743810825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:57:58 ha-061400 dockerd[1451]: time="2025-04-08T23:57:58.164652902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:57:58 ha-061400 dockerd[1451]: time="2025-04-08T23:57:58.164937104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:57:58 ha-061400 dockerd[1451]: time="2025-04-08T23:57:58.164976104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:57:58 ha-061400 dockerd[1451]: time="2025-04-08T23:57:58.166031911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:57:58 ha-061400 cri-dockerd[1341]: time="2025-04-08T23:57:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/848887aaa74b44a80c763209316ef88ccb828e4339ad2d5c404a66fcf26117af/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 08 23:58:00 ha-061400 cri-dockerd[1341]: time="2025-04-08T23:58:00Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 08 23:58:00 ha-061400 dockerd[1451]: time="2025-04-08T23:58:00.367046503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:58:00 ha-061400 dockerd[1451]: time="2025-04-08T23:58:00.367216405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:58:00 ha-061400 dockerd[1451]: time="2025-04-08T23:58:00.367862013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:58:00 ha-061400 dockerd[1451]: time="2025-04-08T23:58:00.368496120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a9e84d4448026       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   848887aaa74b4       busybox-58667487b6-8xfwm
	fa7952995b810       c69fa2e9cbf5f                                                                                         9 minutes ago        Running             coredns                   0                   1044fd2112454       coredns-668d6bf9bc-rzk8c
	ac90a50565e40       c69fa2e9cbf5f                                                                                         9 minutes ago        Running             coredns                   0                   b970cca1abdff       coredns-668d6bf9bc-scvcr
	cb7647ddff9e9       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   053f18a3f15a4       storage-provisioner
	f72554e173731       kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495              9 minutes ago        Running             kindnet-cni               0                   5f2e5e183eeaa       kindnet-44mc6
	231ada3088443       f1332858868e1                                                                                         9 minutes ago        Running             kube-proxy                0                   cfd0b3b4da1c5       kube-proxy-lr9jb
	697735ce06c27       ghcr.io/kube-vip/kube-vip@sha256:e01c90bcdd3eb37a46aaf04f6c86cca3e66dd0db7a231f3c8e8aa105635c158a     9 minutes ago        Running             kube-vip                  0                   70109836f70c1       kube-vip-ha-061400
	cd88701b3604f       b6a454c5a800d                                                                                         10 minutes ago       Running             kube-controller-manager   0                   abf0986ea8b52       kube-controller-manager-ha-061400
	73e54c2230f8c       a9e7e6b294baf                                                                                         10 minutes ago       Running             etcd                      0                   6f42583efa51d       etcd-ha-061400
	327b3e42a6dbb       d8e673e7c9983                                                                                         10 minutes ago       Running             kube-scheduler            0                   1dd3407ceda46       kube-scheduler-ha-061400
	f7ba71d60c8f5       85b7a174738ba                                                                                         10 minutes ago       Running             kube-apiserver            0                   a8b448e178628       kube-apiserver-ha-061400
	
	
	==> coredns [ac90a50565e4] <==
	[INFO] 10.244.0.4:58231 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000371404s
	[INFO] 10.244.0.4:50702 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000219003s
	[INFO] 10.244.2.3:55353 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000259603s
	[INFO] 10.244.2.3:42915 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186002s
	[INFO] 10.244.2.3:48660 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000365805s
	[INFO] 10.244.2.3:36593 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163802s
	[INFO] 10.244.2.2:52254 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150102s
	[INFO] 10.244.2.2:46234 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000192702s
	[INFO] 10.244.2.2:60163 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080101s
	[INFO] 10.244.0.4:43940 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000284903s
	[INFO] 10.244.0.4:40825 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000481805s
	[INFO] 10.244.2.3:51991 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191003s
	[INFO] 10.244.2.3:43007 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142702s
	[INFO] 10.244.2.3:37819 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067201s
	[INFO] 10.244.2.2:46496 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220903s
	[INFO] 10.244.2.2:34047 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000330104s
	[INFO] 10.244.2.2:45982 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131101s
	[INFO] 10.244.0.4:58806 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000303904s
	[INFO] 10.244.0.4:55429 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000246702s
	[INFO] 10.244.0.4:55415 - 5 "PTR IN 1.112.168.192.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd 106 0.000183102s
	[INFO] 10.244.2.3:41378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000242503s
	[INFO] 10.244.2.3:42150 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000210902s
	[INFO] 10.244.2.3:48171 - 5 "PTR IN 1.112.168.192.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd 106 0.000087501s
	[INFO] 10.244.2.2:36492 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000769709s
	[INFO] 10.244.2.2:60128 - 5 "PTR IN 1.112.168.192.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd 106 0.000074201s
	
	
	==> coredns [fa7952995b81] <==
	[INFO] 10.244.0.4:53950 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.182613825s
	[INFO] 10.244.2.3:45134 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001231715s
	[INFO] 10.244.2.2:48114 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160202s
	[INFO] 10.244.2.2:41318 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000343204s
	[INFO] 10.244.2.2:42769 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.32989453s
	[INFO] 10.244.0.4:55193 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000239002s
	[INFO] 10.244.0.4:35997 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024457283s
	[INFO] 10.244.0.4:45383 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183302s
	[INFO] 10.244.2.3:48577 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000093201s
	[INFO] 10.244.2.3:41996 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122501s
	[INFO] 10.244.2.3:52550 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.051362995s
	[INFO] 10.244.2.3:35001 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000235703s
	[INFO] 10.244.2.2:41847 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175303s
	[INFO] 10.244.2.2:41365 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000095201s
	[INFO] 10.244.2.2:57717 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145002s
	[INFO] 10.244.2.2:58572 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000212003s
	[INFO] 10.244.2.2:59561 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000065801s
	[INFO] 10.244.0.4:37240 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000140902s
	[INFO] 10.244.0.4:45692 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000197203s
	[INFO] 10.244.2.3:32983 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142402s
	[INFO] 10.244.2.2:43492 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125701s
	[INFO] 10.244.0.4:50466 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000134702s
	[INFO] 10.244.2.3:46680 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000414104s
	[INFO] 10.244.2.2:53559 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228303s
	[INFO] 10.244.2.2:53088 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000076001s
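
	The NXDOMAIN entries above (e.g. "kubernetes.default.default.svc.cluster.local") are expected rather than errors: the pods' resolv.conf, rewritten earlier in the Docker log with "options ndots:5", makes the resolver append each search domain to any name with fewer than five dots before trying the name verbatim. A toy sketch of that expansion rule follows; it models the assumed resolver search-path behavior, not CoreDNS code, and resolution in practice stops at the first successful answer.

	    package main

	    import (
	        "fmt"
	        "strings"
	    )

	    // expand mimics resolv.conf search-path expansion: a name with fewer
	    // than ndots dots is tried against each search domain first, then as-is.
	    func expand(name string, search []string, ndots int) []string {
	        var tries []string
	        if strings.Count(name, ".") < ndots {
	            for _, domain := range search {
	                tries = append(tries, name+"."+domain)
	            }
	        }
	        return append(tries, name)
	    }

	    func main() {
	        search := []string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	        // "kubernetes.default" has one dot, so every search suffix is tried,
	        // producing the NXDOMAIN lookups visible in the coredns log above.
	        for _, q := range expand("kubernetes.default", search, 5) {
	            fmt.Println(q)
	        }
	    }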
	
	
	==> describe nodes <==
	Name:               ha-061400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-061400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83
	                    minikube.k8s.io/name=ha-061400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_08T23_49_12_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Apr 2025 23:49:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-061400
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Apr 2025 23:59:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Apr 2025 23:58:21 +0000   Tue, 08 Apr 2025 23:49:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Apr 2025 23:58:21 +0000   Tue, 08 Apr 2025 23:49:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Apr 2025 23:58:21 +0000   Tue, 08 Apr 2025 23:49:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Apr 2025 23:58:21 +0000   Tue, 08 Apr 2025 23:49:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.119.206
	  Hostname:    ha-061400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b3d330f5715e45408e02849423800390
	  System UUID:                3aad7807-a96f-3942-abc1-aa927c98bb39
	  Boot ID:                    9ecd26fe-65e5-41d9-ac46-3435dfdf7d65
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-8xfwm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 coredns-668d6bf9bc-rzk8c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m50s
	  kube-system                 coredns-668d6bf9bc-scvcr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m50s
	  kube-system                 etcd-ha-061400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m55s
	  kube-system                 kindnet-44mc6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m51s
	  kube-system                 kube-apiserver-ha-061400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-controller-manager-ha-061400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-proxy-lr9jb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 kube-scheduler-ha-061400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 kube-vip-ha-061400                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m47s  kube-proxy       
	  Normal  Starting                 9m54s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m54s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m54s  kubelet          Node ha-061400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m54s  kubelet          Node ha-061400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m54s  kubelet          Node ha-061400 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m51s  node-controller  Node ha-061400 event: Registered Node ha-061400 in Controller
	  Normal  NodeReady                9m26s  kubelet          Node ha-061400 status is now: NodeReady
	  Normal  RegisteredNode           6m6s   node-controller  Node ha-061400 event: Registered Node ha-061400 in Controller
	  Normal  RegisteredNode           2m7s   node-controller  Node ha-061400 event: Registered Node ha-061400 in Controller
	
	
	Name:               ha-061400-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-061400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83
	                    minikube.k8s.io/name=ha-061400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_08T23_52_53_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Apr 2025 23:52:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-061400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Apr 2025 23:58:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Apr 2025 23:53:18 +0000   Tue, 08 Apr 2025 23:52:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Apr 2025 23:53:18 +0000   Tue, 08 Apr 2025 23:52:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Apr 2025 23:53:18 +0000   Tue, 08 Apr 2025 23:52:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Apr 2025 23:53:18 +0000   Tue, 08 Apr 2025 23:53:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.118.215
	  Hostname:    ha-061400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 af8c08d5c0dd43b89d14f9b41ee99f4d
	  System UUID:                dfbd2a65-43f9-ef48-83c1-f4a679e65267
	  Boot ID:                    2ff197ae-e86f-40f6-a2e5-ef6f3f5aea9a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-061400-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m17s
	  kube-system                 kindnet-7mvqz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m18s
	  kube-system                 kube-apiserver-ha-061400-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-controller-manager-ha-061400-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-proxy-nkwqr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-scheduler-ha-061400-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-vip-ha-061400-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m18s (x8 over 6m18s)  kubelet          Node ha-061400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m18s (x8 over 6m18s)  kubelet          Node ha-061400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m18s (x7 over 6m18s)  kubelet          Node ha-061400-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-061400-m02 event: Registered Node ha-061400-m02 in Controller
	  Normal  RegisteredNode           6m6s                   node-controller  Node ha-061400-m02 event: Registered Node ha-061400-m02 in Controller
	  Normal  RegisteredNode           2m7s                   node-controller  Node ha-061400-m02 event: Registered Node ha-061400-m02 in Controller
	
	
	Name:               ha-061400-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-061400-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83
	                    minikube.k8s.io/name=ha-061400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_08T23_56_53_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Apr 2025 23:56:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-061400-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Apr 2025 23:58:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Apr 2025 23:58:17 +0000   Tue, 08 Apr 2025 23:56:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Apr 2025 23:58:17 +0000   Tue, 08 Apr 2025 23:56:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Apr 2025 23:58:17 +0000   Tue, 08 Apr 2025 23:56:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Apr 2025 23:58:17 +0000   Tue, 08 Apr 2025 23:57:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.126.102
	  Hostname:    ha-061400-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 9278cc1050a0476f97f6d184e6bf83da
	  System UUID:                b21adb76-59a6-864d-b150-09cc92d14a3f
	  Boot ID:                    3dcbf462-4d94-4440-bbfb-e532cb8d8109
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-rjkqv                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  default                     busybox-58667487b6-rxp4w                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 etcd-ha-061400-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m18s
	  kube-system                 kindnet-d8bcw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m20s
	  kube-system                 kube-apiserver-ha-061400-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-controller-manager-ha-061400-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-proxy-rl7bv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-ha-061400-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-vip-ha-061400-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m20s (x8 over 2m20s)  kubelet          Node ha-061400-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m20s (x8 over 2m20s)  kubelet          Node ha-061400-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m20s (x7 over 2m20s)  kubelet          Node ha-061400-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m16s                  node-controller  Node ha-061400-m03 event: Registered Node ha-061400-m03 in Controller
	  Normal  RegisteredNode           2m16s                  node-controller  Node ha-061400-m03 event: Registered Node ha-061400-m03 in Controller
	  Normal  RegisteredNode           2m7s                   node-controller  Node ha-061400-m03 event: Registered Node ha-061400-m03 in Controller
	
	
	==> dmesg <==
	[  +7.348956] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr 8 23:48] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.161005] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[ +30.446502] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[  +0.106580] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.536755] systemd-fstab-generator[1046]: Ignoring "noauto" option for root device
	[  +0.218326] systemd-fstab-generator[1058]: Ignoring "noauto" option for root device
	[  +0.228069] systemd-fstab-generator[1072]: Ignoring "noauto" option for root device
	[  +2.927372] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.218960] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +0.208393] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[  +0.272770] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[ +11.337963] systemd-fstab-generator[1436]: Ignoring "noauto" option for root device
	[  +0.128861] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.599899] systemd-fstab-generator[1703]: Ignoring "noauto" option for root device
	[  +6.562171] systemd-fstab-generator[1856]: Ignoring "noauto" option for root device
	[  +0.102752] kauditd_printk_skb: 74 callbacks suppressed
	[Apr 8 23:49] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.364803] systemd-fstab-generator[2382]: Ignoring "noauto" option for root device
	[  +6.262488] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.472028] kauditd_printk_skb: 29 callbacks suppressed
	[Apr 8 23:52] hrtimer: interrupt took 3515625 ns
	[ +53.513520] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [73e54c2230f8] <==
	{"level":"info","ts":"2025-04-08T23:56:49.066789Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"9ab5492bf637f55c","remote-peer-id":"c2e267c0ac101f55"}
	{"level":"info","ts":"2025-04-08T23:56:49.066719Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"9ab5492bf637f55c","to":"c2e267c0ac101f55","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-04-08T23:56:49.066963Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"9ab5492bf637f55c","remote-peer-id":"c2e267c0ac101f55"}
	{"level":"warn","ts":"2025-04-08T23:56:49.104187Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"c2e267c0ac101f55","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2025-04-08T23:56:49.136110Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"9ab5492bf637f55c","remote-peer-id":"c2e267c0ac101f55"}
	{"level":"info","ts":"2025-04-08T23:56:49.144530Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9ab5492bf637f55c","remote-peer-id":"c2e267c0ac101f55"}
	{"level":"warn","ts":"2025-04-08T23:56:50.103699Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"c2e267c0ac101f55","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2025-04-08T23:56:50.594122Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.881511ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-08T23:56:50.594210Z","caller":"traceutil/trace.go:171","msg":"trace[2069935396] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1498; }","duration":"183.011712ms","start":"2025-04-08T23:56:50.411180Z","end":"2025-04-08T23:56:50.594192Z","steps":["trace[2069935396] 'agreement among raft nodes before linearized reading'  (duration: 95.043829ms)","trace[2069935396] 'range keys from in-memory index tree'  (duration: 87.829482ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-08T23:56:50.595034Z","caller":"traceutil/trace.go:171","msg":"trace[1246071653] transaction","detail":"{read_only:false; response_revision:1499; number_of_response:1; }","duration":"199.33712ms","start":"2025-04-08T23:56:50.395685Z","end":"2025-04-08T23:56:50.595022Z","steps":["trace[1246071653] 'process raft request'  (duration: 110.629133ms)","trace[1246071653] 'compare'  (duration: 87.57158ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-08T23:56:50.772698Z","caller":"traceutil/trace.go:171","msg":"trace[1300955176] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1500; }","duration":"173.800051ms","start":"2025-04-08T23:56:50.598877Z","end":"2025-04-08T23:56:50.772677Z","steps":["trace[1300955176] 'process raft request'  (duration: 131.45917ms)","trace[1300955176] 'compare'  (duration: 42.25488ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-08T23:56:51.103090Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"c2e267c0ac101f55","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2025-04-08T23:56:52.117532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ab5492bf637f55c switched to configuration voters=(8056825265922428990 11147896905788814684 14042900665312747349)"}
	{"level":"info","ts":"2025-04-08T23:56:52.117745Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"3cae724c8734d9a5","local-member-id":"9ab5492bf637f55c"}
	{"level":"info","ts":"2025-04-08T23:56:52.117778Z","caller":"etcdserver/server.go:2018","msg":"applied a configuration change through raft","local-member-id":"9ab5492bf637f55c","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"c2e267c0ac101f55"}
	{"level":"warn","ts":"2025-04-08T23:57:00.590879Z","caller":"etcdserver/raft.go:426","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"c2e267c0ac101f55","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"46.394428ms"}
	{"level":"warn","ts":"2025-04-08T23:57:00.590997Z","caller":"etcdserver/raft.go:426","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"6fcf97e478b2d03e","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"46.517529ms"}
	{"level":"info","ts":"2025-04-08T23:57:00.596520Z","caller":"traceutil/trace.go:171","msg":"trace[990690030] linearizableReadLoop","detail":"{readStateIndex:1739; appliedIndex:1739; }","duration":"146.552968ms","start":"2025-04-08T23:57:00.449953Z","end":"2025-04-08T23:57:00.596506Z","steps":["trace[990690030] 'read index received'  (duration: 146.547668ms)","trace[990690030] 'applied index is now lower than readState.Index'  (duration: 3.8µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-08T23:57:00.596956Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.984872ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-061400-m03\" limit:1 ","response":"range_response_count:1 size:4377"}
	{"level":"info","ts":"2025-04-08T23:57:00.597170Z","caller":"traceutil/trace.go:171","msg":"trace[1846146720] range","detail":"{range_begin:/registry/minions/ha-061400-m03; range_end:; response_count:1; response_revision:1548; }","duration":"147.239073ms","start":"2025-04-08T23:57:00.449854Z","end":"2025-04-08T23:57:00.597093Z","steps":["trace[1846146720] 'agreement among raft nodes before linearized reading'  (duration: 146.862271ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-08T23:57:00.597092Z","caller":"traceutil/trace.go:171","msg":"trace[518767830] transaction","detail":"{read_only:false; response_revision:1549; number_of_response:1; }","duration":"226.823099ms","start":"2025-04-08T23:57:00.370256Z","end":"2025-04-08T23:57:00.597079Z","steps":["trace[518767830] 'process raft request'  (duration: 226.493797ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-08T23:57:00.769721Z","caller":"traceutil/trace.go:171","msg":"trace[708368101] transaction","detail":"{read_only:false; response_revision:1550; number_of_response:1; }","duration":"150.636596ms","start":"2025-04-08T23:57:00.619068Z","end":"2025-04-08T23:57:00.769704Z","steps":["trace[708368101] 'process raft request'  (duration: 90.110196ms)","trace[708368101] 'compare'  (duration: 60.449199ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-08T23:59:04.104329Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1051}
	{"level":"info","ts":"2025-04-08T23:59:04.274054Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1051,"took":"164.99122ms","hash":1871053304,"current-db-size-bytes":3653632,"current-db-size":"3.7 MB","current-db-size-in-use-bytes":2117632,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-04-08T23:59:04.274173Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1871053304,"revision":1051,"compact-revision":-1}
	
	
	==> kernel <==
	 23:59:05 up 12 min,  0 users,  load average: 0.84, 0.55, 0.37
	Linux ha-061400 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f72554e17373] <==
	I0408 23:58:15.636834       1 main.go:324] Node ha-061400-m03 has CIDR [10.244.2.0/24] 
	I0408 23:58:25.626364       1 main.go:297] Handling node with IPs: map[192.168.119.206:{}]
	I0408 23:58:25.626551       1 main.go:301] handling current node
	I0408 23:58:25.626602       1 main.go:297] Handling node with IPs: map[192.168.118.215:{}]
	I0408 23:58:25.626657       1 main.go:324] Node ha-061400-m02 has CIDR [10.244.1.0/24] 
	I0408 23:58:25.627346       1 main.go:297] Handling node with IPs: map[192.168.126.102:{}]
	I0408 23:58:25.627377       1 main.go:324] Node ha-061400-m03 has CIDR [10.244.2.0/24] 
	I0408 23:58:35.627767       1 main.go:297] Handling node with IPs: map[192.168.119.206:{}]
	I0408 23:58:35.627949       1 main.go:301] handling current node
	I0408 23:58:35.628096       1 main.go:297] Handling node with IPs: map[192.168.118.215:{}]
	I0408 23:58:35.628287       1 main.go:324] Node ha-061400-m02 has CIDR [10.244.1.0/24] 
	I0408 23:58:35.628712       1 main.go:297] Handling node with IPs: map[192.168.126.102:{}]
	I0408 23:58:35.628788       1 main.go:324] Node ha-061400-m03 has CIDR [10.244.2.0/24] 
	I0408 23:58:45.633809       1 main.go:297] Handling node with IPs: map[192.168.119.206:{}]
	I0408 23:58:45.633933       1 main.go:301] handling current node
	I0408 23:58:45.633955       1 main.go:297] Handling node with IPs: map[192.168.118.215:{}]
	I0408 23:58:45.633963       1 main.go:324] Node ha-061400-m02 has CIDR [10.244.1.0/24] 
	I0408 23:58:45.634480       1 main.go:297] Handling node with IPs: map[192.168.126.102:{}]
	I0408 23:58:45.634500       1 main.go:324] Node ha-061400-m03 has CIDR [10.244.2.0/24] 
	I0408 23:58:55.632510       1 main.go:297] Handling node with IPs: map[192.168.119.206:{}]
	I0408 23:58:55.632644       1 main.go:301] handling current node
	I0408 23:58:55.632671       1 main.go:297] Handling node with IPs: map[192.168.118.215:{}]
	I0408 23:58:55.632685       1 main.go:324] Node ha-061400-m02 has CIDR [10.244.1.0/24] 
	I0408 23:58:55.633145       1 main.go:297] Handling node with IPs: map[192.168.126.102:{}]
	I0408 23:58:55.633282       1 main.go:324] Node ha-061400-m03 has CIDR [10.244.2.0/24] 
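
	Each kindnet cycle above walks the node list and, for every remote node, ensures a route to that node's PodCIDR via its InternalIP (here 10.244.1.0/24 via 192.168.118.215 and 10.244.2.0/24 via 192.168.126.102). A conceptual one-node version of that reconcile step is sketched below; it assumes the vishvananda/netlink package, runs only as root on Linux, and leaves out the IPv6 handling and error retries real kindnet performs.

	    package main

	    import (
	        "log"
	        "net"

	        "github.com/vishvananda/netlink"
	    )

	    func main() {
	        // Route one remote node's PodCIDR via its InternalIP
	        // (values taken from the kindnet log above).
	        _, podCIDR, err := net.ParseCIDR("10.244.1.0/24")
	        if err != nil {
	            log.Fatal(err)
	        }
	        route := &netlink.Route{Dst: podCIDR, Gw: net.ParseIP("192.168.118.215")}
	        if err := netlink.RouteReplace(route); err != nil {
	            log.Fatal(err)
	        }
	    }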
	
	
	==> kube-apiserver [f7ba71d60c8f] <==
	I0408 23:49:11.141029       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0408 23:49:11.177765       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0408 23:49:11.220917       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0408 23:49:14.757769       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0408 23:49:14.871150       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0408 23:56:46.575240       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0408 23:56:46.575358       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 14.4µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0408 23:56:46.576463       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0408 23:56:46.577604       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0408 23:56:46.578958       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="20.917138ms" method="PATCH" path="/api/v1/namespaces/default/events/ha-061400-m03.18347d3120e7d3d8" result=null
	E0408 23:58:04.947664       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54806: use of closed network connection
	E0408 23:58:05.527144       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54810: use of closed network connection
	E0408 23:58:07.371787       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54812: use of closed network connection
	E0408 23:58:07.941292       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54814: use of closed network connection
	E0408 23:58:08.521650       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54816: use of closed network connection
	E0408 23:58:09.055570       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54818: use of closed network connection
	E0408 23:58:09.541335       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54820: use of closed network connection
	E0408 23:58:10.039595       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54822: use of closed network connection
	E0408 23:58:10.529783       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54824: use of closed network connection
	E0408 23:58:11.478313       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54827: use of closed network connection
	E0408 23:58:22.003095       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54830: use of closed network connection
	E0408 23:58:22.524565       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54833: use of closed network connection
	E0408 23:58:33.075009       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54835: use of closed network connection
	E0408 23:58:33.566498       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54838: use of closed network connection
	E0408 23:58:44.109837       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54840: use of closed network connection
	
	
	==> kube-controller-manager [cd88701b3604] <==
	I0408 23:56:53.280945       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m03"
	I0408 23:56:53.474869       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m03"
	I0408 23:56:55.884328       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m03"
	I0408 23:56:58.176137       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m03"
	I0408 23:56:58.242876       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m03"
	I0408 23:57:12.851786       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m03"
	I0408 23:57:12.896905       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m03"
	I0408 23:57:13.220722       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m03"
	I0408 23:57:16.314474       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m03"
	I0408 23:57:57.106566       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="195.545479ms"
	I0408 23:57:57.253698       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="146.83076ms"
	I0408 23:57:57.620022       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="363.668777ms"
	I0408 23:57:57.697880       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="77.315405ms"
	I0408 23:57:57.698009       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="61.2µs"
	I0408 23:57:58.633534       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="143.001µs"
	I0408 23:57:58.652761       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="77.4µs"
	I0408 23:57:58.659379       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="44.4µs"
	I0408 23:58:00.790360       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="108.147062ms"
	I0408 23:58:00.791385       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="71.001µs"
	I0408 23:58:00.904553       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="53.853928ms"
	I0408 23:58:00.905143       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="419.604µs"
	I0408 23:58:01.847078       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="17.005298ms"
	I0408 23:58:01.848608       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="64.401µs"
	I0408 23:58:17.734683       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m03"
	I0408 23:58:21.301215       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400"
	
	
	==> kube-proxy [231ada308844] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0408 23:49:17.949819       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0408 23:49:18.021892       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.119.206"]
	E0408 23:49:18.026305       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0408 23:49:18.099252       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0408 23:49:18.099424       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 23:49:18.099462       1 server_linux.go:170] "Using iptables Proxier"
	I0408 23:49:18.105499       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0408 23:49:18.107446       1 server.go:497] "Version info" version="v1.32.2"
	I0408 23:49:18.107621       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 23:49:18.114618       1 config.go:199] "Starting service config controller"
	I0408 23:49:18.115991       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0408 23:49:18.116218       1 config.go:329] "Starting node config controller"
	I0408 23:49:18.116303       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0408 23:49:18.120167       1 config.go:105] "Starting endpoint slice config controller"
	I0408 23:49:18.120207       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0408 23:49:18.216569       1 shared_informer.go:320] Caches are synced for service config
	I0408 23:49:18.216693       1 shared_informer.go:320] Caches are synced for node config
	I0408 23:49:18.221130       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [327b3e42a6db] <==
	E0408 23:49:07.866222       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0408 23:49:07.942601       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0408 23:49:07.943068       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0408 23:49:07.943027       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0408 23:49:07.943185       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0408 23:49:07.988489       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0408 23:49:07.988591       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0408 23:49:07.990530       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0408 23:49:07.990801       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0408 23:49:08.011030       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0408 23:49:08.011411       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0408 23:49:08.088084       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0408 23:49:08.088140       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0408 23:49:08.088198       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0408 23:49:08.088215       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0408 23:49:08.166036       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0408 23:49:08.166177       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0408 23:49:08.166378       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0408 23:49:08.166482       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0408 23:49:10.132153       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0408 23:57:57.037950       1 cache.go:504] "Pod was added to a different node than it was assumed" podKey="9db65570-aafe-4092-9a0e-365b7d2881f6" pod="default/busybox-58667487b6-rxp4w" assumedNode="ha-061400-m03" currentNode="ha-061400-m02"
	E0408 23:57:57.044544       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-rxp4w\": pod busybox-58667487b6-rxp4w is already assigned to node \"ha-061400-m03\"" plugin="DefaultBinder" pod="default/busybox-58667487b6-rxp4w" node="ha-061400-m02"
	E0408 23:57:57.050241       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod 9db65570-aafe-4092-9a0e-365b7d2881f6(default/busybox-58667487b6-rxp4w) was assumed on ha-061400-m02 but assigned to ha-061400-m03" pod="default/busybox-58667487b6-rxp4w"
	E0408 23:57:57.050450       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-rxp4w\": pod busybox-58667487b6-rxp4w is already assigned to node \"ha-061400-m03\"" pod="default/busybox-58667487b6-rxp4w"
	I0408 23:57:57.050504       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-58667487b6-rxp4w" node="ha-061400-m03"
	
	
	==> kubelet <==
	Apr 08 23:54:11 ha-061400 kubelet[2389]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 23:54:11 ha-061400 kubelet[2389]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 23:54:11 ha-061400 kubelet[2389]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 23:55:11 ha-061400 kubelet[2389]: E0408 23:55:11.329615    2389 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 08 23:55:11 ha-061400 kubelet[2389]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 23:55:11 ha-061400 kubelet[2389]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 23:55:11 ha-061400 kubelet[2389]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 23:55:11 ha-061400 kubelet[2389]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 23:56:11 ha-061400 kubelet[2389]: E0408 23:56:11.333919    2389 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 08 23:56:11 ha-061400 kubelet[2389]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 23:56:11 ha-061400 kubelet[2389]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 23:56:11 ha-061400 kubelet[2389]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 23:56:11 ha-061400 kubelet[2389]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 23:57:11 ha-061400 kubelet[2389]: E0408 23:57:11.333361    2389 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 08 23:57:11 ha-061400 kubelet[2389]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 23:57:11 ha-061400 kubelet[2389]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 23:57:11 ha-061400 kubelet[2389]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 23:57:11 ha-061400 kubelet[2389]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 08 23:57:57 ha-061400 kubelet[2389]: I0408 23:57:57.127364    2389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-scvcr" podStartSLOduration=522.123523396 podStartE2EDuration="8m42.123523396s" podCreationTimestamp="2025-04-08 23:49:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-08 23:49:41.725895319 +0000 UTC m=+30.722682004" watchObservedRunningTime="2025-04-08 23:57:57.123523396 +0000 UTC m=+526.120310081"
	Apr 08 23:57:57 ha-061400 kubelet[2389]: I0408 23:57:57.264680    2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgdvf\" (UniqueName: \"kubernetes.io/projected/4fe81839-0904-4769-804f-bc016ed888e7-kube-api-access-dgdvf\") pod \"busybox-58667487b6-8xfwm\" (UID: \"4fe81839-0904-4769-804f-bc016ed888e7\") " pod="default/busybox-58667487b6-8xfwm"
	Apr 08 23:58:11 ha-061400 kubelet[2389]: E0408 23:58:11.322805    2389 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 08 23:58:11 ha-061400 kubelet[2389]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 08 23:58:11 ha-061400 kubelet[2389]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 08 23:58:11 ha-061400 kubelet[2389]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 08 23:58:11 ha-061400 kubelet[2389]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-061400 -n ha-061400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-061400 -n ha-061400: (12.5174194s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-061400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (69.55s)
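Note on the kube-scheduler log above: it records a benign race that is expected in an HA cluster. The pod busybox-58667487b6-rxp4w was bound to ha-061400-m03 while the scheduler cache still assumed ha-061400-m02, so the second binding request was rejected with "Operation cannot be fulfilled on pods/binding ..." and the scheduler correctly aborted the retry ("Pod has been assigned to node."). That error is the API server's optimistic-concurrency conflict (HTTP 409), the same class of error ordinary clients handle by re-reading the object and retrying. A minimal client-go sketch of that retry idiom follows; it is illustrative only, not part of the test suite, and it assumes a reachable kubeconfig plus a pod name borrowed from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	// Assumption: kubeconfig at the default location; adjust as needed.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// RetryOnConflict re-runs the closure whenever the API server answers
	// 409 Conflict ("Operation cannot be fulfilled on ..."), re-reading the
	// object so the update is applied against the latest resourceVersion.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, getErr := clientset.CoreV1().Pods("default").Get(context.TODO(), "busybox-58667487b6-rxp4w", metav1.GetOptions{})
		if getErr != nil {
			return getErr
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["touched"] = "true"
		_, updateErr := clientset.CoreV1().Pods("default").Update(context.TODO(), pod, metav1.UpdateOptions{})
		return updateErr
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("update applied")
}

retry.DefaultRetry caps the loop at a handful of attempts with short backoff, which is usually enough because each attempt re-reads the latest resourceVersion before writing.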

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (84.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 node stop m02 -v=7 --alsologtostderr: (34.9714063s)
ha_test.go:371: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-061400 status -v=7 --alsologtostderr: exit status 1 (14.2976004s)

                                                
                                                
** stderr ** 
	I0409 00:15:51.461124    4496 out.go:345] Setting OutFile to fd 1812 ...
	I0409 00:15:51.547616    4496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 00:15:51.547616    4496 out.go:358] Setting ErrFile to fd 1816...
	I0409 00:15:51.547686    4496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 00:15:51.562887    4496 out.go:352] Setting JSON to false
	I0409 00:15:51.562887    4496 mustload.go:65] Loading cluster: ha-061400
	I0409 00:15:51.562887    4496 notify.go:220] Checking for updates...
	I0409 00:15:51.564679    4496 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 00:15:51.564679    4496 status.go:174] checking status of ha-061400 ...
	I0409 00:15:51.565232    4496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0409 00:15:53.857908    4496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:15:53.857937    4496 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:15:53.857937    4496 status.go:371] ha-061400 host status = "Running" (err=<nil>)
	I0409 00:15:53.857937    4496 host.go:66] Checking if "ha-061400" exists ...
	I0409 00:15:53.858461    4496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0409 00:15:56.091155    4496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:15:56.091155    4496 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:15:56.091155    4496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0409 00:15:58.747220    4496 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0409 00:15:58.747220    4496 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:15:58.747220    4496 host.go:66] Checking if "ha-061400" exists ...
	I0409 00:15:58.762932    4496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0409 00:15:58.762932    4496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0409 00:16:00.968185    4496 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:16:00.968185    4496 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:16:00.968651    4496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0409 00:16:03.688620    4496 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0409 00:16:03.688620    4496 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:16:03.689635    4496 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0409 00:16:03.787347    4496 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0243491s)
	I0409 00:16:03.799531    4496 ssh_runner.go:195] Run: systemctl --version
	I0409 00:16:03.821887    4496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0409 00:16:03.848130    4496 kubeconfig.go:125] found "ha-061400" server: "https://192.168.127.254:8443"
	I0409 00:16:03.848130    4496 api_server.go:166] Checking apiserver status ...
	I0409 00:16:03.861457    4496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 00:16:03.901240    4496 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2151/cgroup
	W0409 00:16:03.921586    4496 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0409 00:16:03.932924    4496 ssh_runner.go:195] Run: ls
	I0409 00:16:03.940887    4496 api_server.go:253] Checking apiserver healthz at https://192.168.127.254:8443/healthz ...
	I0409 00:16:03.949675    4496 api_server.go:279] https://192.168.127.254:8443/healthz returned 200:
	ok
	I0409 00:16:03.949675    4496 status.go:463] ha-061400 apiserver status = Running (err=<nil>)
	I0409 00:16:03.949738    4496 status.go:176] ha-061400 status: &{Name:ha-061400 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0409 00:16:03.949738    4496 status.go:174] checking status of ha-061400-m02 ...
	I0409 00:16:03.950449    4496 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state

                                                
                                                
** /stderr **
ha_test.go:374: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-061400 status -v=7 --alsologtostderr" : exit status 1
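The status probe in the stderr above checks control-plane health by issuing a GET to /healthz on the HA virtual endpoint and accepting an HTTP 200 "ok" (api_server.go:253 and :279 in the log). A self-contained sketch of the same style of probe is below; the address is copied from the log, the 5-second timeout is an assumption, and TLS verification is skipped only because this throwaway probe has no cluster CA at hand (real tooling should verify against the cluster's CA certificate):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: no cluster CA available here, so skip
			// verification for this illustrative probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	// Endpoint taken from the log above; substitute your cluster's
	// API server address.
	resp, err := client.Get("https://192.168.127.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the literal body "ok".
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}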
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-061400 -n ha-061400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-061400 -n ha-061400: (12.3698119s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 logs -n 25: (8.9156323s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| cp      | ha-061400 cp ha-061400-m03:/home/docker/cp-test.txt                                                                       | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:10 UTC | 09 Apr 25 00:10 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2518069842\001\cp-test_ha-061400-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-061400 ssh -n                                                                                                          | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:10 UTC | 09 Apr 25 00:10 UTC |
	|         | ha-061400-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-061400 cp ha-061400-m03:/home/docker/cp-test.txt                                                                       | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:10 UTC | 09 Apr 25 00:11 UTC |
	|         | ha-061400:/home/docker/cp-test_ha-061400-m03_ha-061400.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-061400 ssh -n                                                                                                          | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:11 UTC | 09 Apr 25 00:11 UTC |
	|         | ha-061400-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-061400 ssh -n ha-061400 sudo cat                                                                                       | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:11 UTC | 09 Apr 25 00:11 UTC |
	|         | /home/docker/cp-test_ha-061400-m03_ha-061400.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-061400 cp ha-061400-m03:/home/docker/cp-test.txt                                                                       | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:11 UTC | 09 Apr 25 00:11 UTC |
	|         | ha-061400-m02:/home/docker/cp-test_ha-061400-m03_ha-061400-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-061400 ssh -n                                                                                                          | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:11 UTC | 09 Apr 25 00:12 UTC |
	|         | ha-061400-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-061400 ssh -n ha-061400-m02 sudo cat                                                                                   | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:12 UTC | 09 Apr 25 00:12 UTC |
	|         | /home/docker/cp-test_ha-061400-m03_ha-061400-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-061400 cp ha-061400-m03:/home/docker/cp-test.txt                                                                       | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:12 UTC | 09 Apr 25 00:12 UTC |
	|         | ha-061400-m04:/home/docker/cp-test_ha-061400-m03_ha-061400-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-061400 ssh -n                                                                                                          | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:12 UTC | 09 Apr 25 00:12 UTC |
	|         | ha-061400-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-061400 ssh -n ha-061400-m04 sudo cat                                                                                   | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:12 UTC | 09 Apr 25 00:12 UTC |
	|         | /home/docker/cp-test_ha-061400-m03_ha-061400-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-061400 cp testdata\cp-test.txt                                                                                         | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:12 UTC | 09 Apr 25 00:12 UTC |
	|         | ha-061400-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-061400 ssh -n                                                                                                          | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:12 UTC | 09 Apr 25 00:13 UTC |
	|         | ha-061400-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-061400 cp ha-061400-m04:/home/docker/cp-test.txt                                                                       | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:13 UTC | 09 Apr 25 00:13 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2518069842\001\cp-test_ha-061400-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-061400 ssh -n                                                                                                          | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:13 UTC | 09 Apr 25 00:13 UTC |
	|         | ha-061400-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-061400 cp ha-061400-m04:/home/docker/cp-test.txt                                                                       | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:13 UTC | 09 Apr 25 00:13 UTC |
	|         | ha-061400:/home/docker/cp-test_ha-061400-m04_ha-061400.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-061400 ssh -n                                                                                                          | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:13 UTC | 09 Apr 25 00:13 UTC |
	|         | ha-061400-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-061400 ssh -n ha-061400 sudo cat                                                                                       | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:13 UTC | 09 Apr 25 00:14 UTC |
	|         | /home/docker/cp-test_ha-061400-m04_ha-061400.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-061400 cp ha-061400-m04:/home/docker/cp-test.txt                                                                       | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:14 UTC | 09 Apr 25 00:14 UTC |
	|         | ha-061400-m02:/home/docker/cp-test_ha-061400-m04_ha-061400-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-061400 ssh -n                                                                                                          | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:14 UTC | 09 Apr 25 00:14 UTC |
	|         | ha-061400-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-061400 ssh -n ha-061400-m02 sudo cat                                                                                   | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:14 UTC | 09 Apr 25 00:14 UTC |
	|         | /home/docker/cp-test_ha-061400-m04_ha-061400-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-061400 cp ha-061400-m04:/home/docker/cp-test.txt                                                                       | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:14 UTC | 09 Apr 25 00:14 UTC |
	|         | ha-061400-m03:/home/docker/cp-test_ha-061400-m04_ha-061400-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-061400 ssh -n                                                                                                          | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:14 UTC | 09 Apr 25 00:15 UTC |
	|         | ha-061400-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-061400 ssh -n ha-061400-m03 sudo cat                                                                                   | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:15 UTC | 09 Apr 25 00:15 UTC |
	|         | /home/docker/cp-test_ha-061400-m04_ha-061400-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-061400 node stop m02 -v=7                                                                                              | ha-061400 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:15 UTC | 09 Apr 25 00:15 UTC |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 23:46:05
	Running on machine: minikube6
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 23:46:05.713268    7680 out.go:345] Setting OutFile to fd 1072 ...
	I0408 23:46:05.782891    7680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:46:05.782891    7680 out.go:358] Setting ErrFile to fd 1268...
	I0408 23:46:05.782891    7680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 23:46:05.804615    7680 out.go:352] Setting JSON to false
	I0408 23:46:05.807921    7680 start.go:129] hostinfo: {"hostname":"minikube6","uptime":12963,"bootTime":1744143002,"procs":175,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0408 23:46:05.807921    7680 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 23:46:05.812960    7680 out.go:177] * [ha-061400] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0408 23:46:05.817953    7680 notify.go:220] Checking for updates...
	I0408 23:46:05.817994    7680 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0408 23:46:05.821887    7680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0408 23:46:05.824808    7680 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0408 23:46:05.827473    7680 out.go:177]   - MINIKUBE_LOCATION=20501
	I0408 23:46:05.829945    7680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0408 23:46:05.834193    7680 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 23:46:11.014871    7680 out.go:177] * Using the hyperv driver based on user configuration
	I0408 23:46:11.018348    7680 start.go:297] selected driver: hyperv
	I0408 23:46:11.018348    7680 start.go:901] validating driver "hyperv" against <nil>
	I0408 23:46:11.018348    7680 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0408 23:46:11.072670    7680 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0408 23:46:11.073693    7680 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 23:46:11.073693    7680 cni.go:84] Creating CNI manager for ""
	I0408 23:46:11.073693    7680 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0408 23:46:11.073693    7680 start_flags.go:320] Found "CNI" CNI - setting NetworkPlugin=cni
	I0408 23:46:11.074783    7680 start.go:340] cluster config:
	{Name:ha-061400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:46:11.074860    7680 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 23:46:11.079661    7680 out.go:177] * Starting "ha-061400" primary control-plane node in "ha-061400" cluster
	I0408 23:46:11.083186    7680 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0408 23:46:11.083327    7680 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0408 23:46:11.083327    7680 cache.go:56] Caching tarball of preloaded images
	I0408 23:46:11.083327    7680 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0408 23:46:11.083989    7680 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0408 23:46:11.083989    7680 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\config.json ...
	I0408 23:46:11.084793    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\config.json: {Name:mk1cc615eb76a4f9e67628aefb51723da50e1159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:46:11.085897    7680 start.go:360] acquireMachinesLock for ha-061400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 23:46:11.085897    7680 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-061400"
	I0408 23:46:11.086566    7680 start.go:93] Provisioning new machine with config: &{Name:ha-061400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 23:46:11.086566    7680 start.go:125] createHost starting for "" (driver="hyperv")
	I0408 23:46:11.090881    7680 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 23:46:11.091835    7680 start.go:159] libmachine.API.Create for "ha-061400" (driver="hyperv")
	I0408 23:46:11.091835    7680 client.go:168] LocalClient.Create starting
	I0408 23:46:11.091835    7680 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0408 23:46:11.091835    7680 main.go:141] libmachine: Decoding PEM data...
	I0408 23:46:11.091835    7680 main.go:141] libmachine: Parsing certificate...
	I0408 23:46:11.091835    7680 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0408 23:46:11.091835    7680 main.go:141] libmachine: Decoding PEM data...
	I0408 23:46:11.091835    7680 main.go:141] libmachine: Parsing certificate...
	I0408 23:46:11.093385    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0408 23:46:13.111509    7680 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0408 23:46:13.111509    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:13.111617    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0408 23:46:14.779011    7680 main.go:141] libmachine: [stdout =====>] : False
	
	I0408 23:46:14.779585    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:14.779585    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0408 23:46:16.204660    7680 main.go:141] libmachine: [stdout =====>] : True
	
	I0408 23:46:16.204660    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:16.204660    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0408 23:46:19.720271    7680 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0408 23:46:19.720271    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:19.723242    7680 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0408 23:46:20.202897    7680 main.go:141] libmachine: Creating SSH key...
	I0408 23:46:20.609639    7680 main.go:141] libmachine: Creating VM...
	I0408 23:46:20.609639    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0408 23:46:23.422179    7680 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0408 23:46:23.422179    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:23.422936    7680 main.go:141] libmachine: Using switch "Default Switch"
	I0408 23:46:23.422995    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0408 23:46:25.096040    7680 main.go:141] libmachine: [stdout =====>] : True
	
	I0408 23:46:25.096189    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:25.096189    7680 main.go:141] libmachine: Creating VHD
	I0408 23:46:25.096189    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0408 23:46:28.788861    7680 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 394A1494-325F-4CA9-A009-3434592A9134
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0408 23:46:28.788861    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:28.789029    7680 main.go:141] libmachine: Writing magic tar header
	I0408 23:46:28.789133    7680 main.go:141] libmachine: Writing SSH key tar header
	I0408 23:46:28.801281    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0408 23:46:31.941136    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:46:31.941250    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:31.941337    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\disk.vhd' -SizeBytes 20000MB
	I0408 23:46:34.497903    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:46:34.498610    7680 main.go:141] libmachine: [stderr =====>] : 
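The boot disk is produced in three steps visible above: a tiny fixed VHD is created so raw bytes (the "magic" tar header and SSH key) can be written directly into it, the file is converted to a dynamic VHD, and the result is resized to the requested 20000MB. The same sequence replayed by hand, assuming a hypothetical scratch directory $dir:

    $dir = 'C:\scratch\ha-061400'   # hypothetical working directory
    Hyper-V\New-VHD -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
    # (minikube writes the magic tar header and the SSH key into fixed.vhd at this point)
    Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD -Path "$dir\disk.vhd" -SizeBytes 20000MB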
	I0408 23:46:34.498610    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-061400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0408 23:46:38.061855    7680 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-061400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0408 23:46:38.062960    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:38.063063    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-061400 -DynamicMemoryEnabled $false
	I0408 23:46:40.300145    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:46:40.300999    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:40.301101    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-061400 -Count 2
	I0408 23:46:42.532653    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:46:42.532653    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:42.533293    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-061400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\boot2docker.iso'
	I0408 23:46:45.113692    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:46:45.113762    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:45.113762    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-061400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\disk.vhd'
	I0408 23:46:47.702557    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:46:47.702557    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:47.703111    7680 main.go:141] libmachine: Starting VM...
	I0408 23:46:47.703149    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-061400
	I0408 23:46:50.748534    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:46:50.748868    7680 main.go:141] libmachine: [stderr =====>] : 
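VM creation itself is a straight run of Hyper-V cmdlets: create the VM on the chosen switch, pin static memory and the CPU count, attach the boot ISO and the disk, then start it. The equivalent standalone sequence, reusing $dir from the sketch above:

    Hyper-V\New-VM ha-061400 -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName ha-061400 -DynamicMemoryEnabled $false   # pin at a fixed 2200MB
    Hyper-V\Set-VMProcessor ha-061400 -Count 2
    Hyper-V\Set-VMDvdDrive -VMName ha-061400 -Path "$dir\boot2docker.iso"
    Hyper-V\Add-VMHardDiskDrive -VMName ha-061400 -Path "$dir\disk.vhd"
    Hyper-V\Start-VM ha-061400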
	I0408 23:46:50.748868    7680 main.go:141] libmachine: Waiting for host to start...
	I0408 23:46:50.748990    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:46:52.997942    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:46:52.998052    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:52.998052    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:46:55.504673    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:46:55.504673    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:56.505969    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:46:58.750675    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:46:58.750675    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:46:58.750675    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:01.261071    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:47:01.261489    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:02.261921    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:04.500047    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:04.500047    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:04.500047    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:06.994730    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:47:06.994730    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:07.994795    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:10.229017    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:10.229017    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:10.229924    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:12.806708    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:47:12.806766    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:13.807273    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:16.051095    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:16.052102    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:16.052102    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:18.567166    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:47:18.567166    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:18.567166    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:20.676535    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:20.676726    7680 main.go:141] libmachine: [stderr =====>] : 
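The "Waiting for host to start..." phase above is a simple poll: check the VM state, ask the first network adapter for its first reported address, and sleep between empty results (DHCP takes about 30 seconds in this run before 192.168.119.206 appears). A sketch of that loop:

    do {
        $state = (Hyper-V\Get-VM ha-061400).State
        $ip    = ((Hyper-V\Get-VM ha-061400).NetworkAdapters[0]).IPAddresses | Select-Object -First 1
        if (-not $ip) { Start-Sleep -Seconds 1 }   # adapter reports no address until the guest is up
    } while ($state -eq 'Running' -and -not $ip)
    "state=$state ip=$ip"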
	I0408 23:47:20.676726    7680 machine.go:93] provisionDockerMachine start ...
	I0408 23:47:20.676726    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:22.816637    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:22.817119    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:22.817119    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:25.281340    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:47:25.282178    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:25.288162    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:47:25.302648    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.119.206 22 <nil> <nil>}
	I0408 23:47:25.302715    7680 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 23:47:25.426721    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 23:47:25.426829    7680 buildroot.go:166] provisioning hostname "ha-061400"
	I0408 23:47:25.426829    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:27.518057    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:27.518057    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:27.518134    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:30.022921    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:47:30.022921    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:30.027478    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:47:30.028276    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.119.206 22 <nil> <nil>}
	I0408 23:47:30.028276    7680 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-061400 && echo "ha-061400" | sudo tee /etc/hostname
	I0408 23:47:30.193197    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-061400
	
	I0408 23:47:30.193197    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:32.280966    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:32.281290    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:32.281290    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:34.743525    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:47:34.743525    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:34.749367    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:47:34.750082    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.119.206 22 <nil> <nil>}
	I0408 23:47:34.750082    7680 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-061400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-061400/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-061400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 23:47:34.888358    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
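From here provisioning runs over SSH with the generated machine key: set the hostname, then make /etc/hosts resolve it via the 127.0.1.1 convention. A simplified replay with the Windows OpenSSH client (key path and docker user taken from this log; the real script also rewrites an existing 127.0.1.1 entry in place):

    $key = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa'
    ssh -i $key docker@192.168.119.206 'sudo hostname ha-061400 && echo "ha-061400" | sudo tee /etc/hostname'
    ssh -i $key docker@192.168.119.206 'grep -q ha-061400 /etc/hosts || echo "127.0.1.1 ha-061400" | sudo tee -a /etc/hosts'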
	I0408 23:47:34.888420    7680 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0408 23:47:34.888484    7680 buildroot.go:174] setting up certificates
	I0408 23:47:34.888576    7680 provision.go:84] configureAuth start
	I0408 23:47:34.888676    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:36.946744    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:36.947749    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:36.947787    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:39.460615    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:47:39.462061    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:39.462151    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:41.500916    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:41.500967    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:41.500967    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:43.966053    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:47:43.966053    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:43.966260    7680 provision.go:143] copyHostCerts
	I0408 23:47:43.966429    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0408 23:47:43.966657    7680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0408 23:47:43.966751    7680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0408 23:47:43.967202    7680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0408 23:47:43.968669    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0408 23:47:43.968956    7680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0408 23:47:43.969025    7680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0408 23:47:43.969383    7680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0408 23:47:43.970587    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0408 23:47:43.970844    7680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0408 23:47:43.970949    7680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0408 23:47:43.971370    7680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0408 23:47:43.972256    7680 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-061400 san=[127.0.0.1 192.168.119.206 ha-061400 localhost minikube]
	I0408 23:47:44.157929    7680 provision.go:177] copyRemoteCerts
	I0408 23:47:44.169937    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 23:47:44.169937    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:46.225885    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:46.225885    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:46.226514    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:48.729822    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:47:48.730848    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:48.731389    7680 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0408 23:47:48.848305    7680 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6783065s)
	I0408 23:47:48.848305    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0408 23:47:48.848678    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0408 23:47:48.894059    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0408 23:47:48.894086    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0408 23:47:48.935927    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0408 23:47:48.936311    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 23:47:48.976669    7680 provision.go:87] duration metric: took 14.0878196s to configureAuth
	I0408 23:47:48.976669    7680 buildroot.go:189] setting minikube options for container-runtime
	I0408 23:47:48.976925    7680 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:47:48.976925    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:51.123956    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:51.124252    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:51.124252    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:53.652413    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:47:53.652413    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:53.658532    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:47:53.659297    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.119.206 22 <nil> <nil>}
	I0408 23:47:53.659297    7680 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0408 23:47:53.790134    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0408 23:47:53.790134    7680 buildroot.go:70] root file system type: tmpfs
	I0408 23:47:53.790362    7680 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0408 23:47:53.790440    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:47:55.862317    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:47:55.862405    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:55.862405    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:47:58.349515    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:47:58.350307    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:47:58.356398    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:47:58.357092    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.119.206 22 <nil> <nil>}
	I0408 23:47:58.357092    7680 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0408 23:47:58.522869    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0408 23:47:58.523419    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:48:00.661956    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:48:00.661956    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:00.663127    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:48:03.201659    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:48:03.201936    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:03.208215    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:48:03.208367    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.119.206 22 <nil> <nil>}
	I0408 23:48:03.208367    7680 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0408 23:48:05.435650    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0408 23:48:05.435650    7680 machine.go:96] duration metric: took 44.7583375s to provisionDockerMachine
	I0408 23:48:05.436221    7680 client.go:171] duration metric: took 1m54.3428816s to LocalClient.Create
	I0408 23:48:05.436271    7680 start.go:167] duration metric: took 1m54.3428816s to libmachine.API.Create "ha-061400"
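The unit file was installed with an idempotent diff-or-replace one-liner: diff fails its stat on the missing service file, so the new unit is moved into place, enabled, and docker restarted. With dockerd now listening on tcp/2376 behind the TLS material configureAuth copied in, the endpoint can be exercised from the host using the client certs minikube keeps under .minikube\certs (a sketch, assuming the docker CLI is on PATH):

    $certs = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs'
    docker --tlsverify --tlscacert "$certs\ca.pem" --tlscert "$certs\cert.pem" --tlskey "$certs\key.pem" `
           -H tcp://192.168.119.206:2376 version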
	I0408 23:48:05.436345    7680 start.go:293] postStartSetup for "ha-061400" (driver="hyperv")
	I0408 23:48:05.436345    7680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 23:48:05.447627    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 23:48:05.447627    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:48:07.466827    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:48:07.466827    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:07.466827    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:48:09.952018    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:48:09.952018    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:09.952185    7680 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0408 23:48:10.060611    7680 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6128708s)
	I0408 23:48:10.072338    7680 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 23:48:10.078585    7680 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 23:48:10.078585    7680 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0408 23:48:10.079263    7680 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0408 23:48:10.080154    7680 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
	I0408 23:48:10.080225    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /etc/ssl/certs/98642.pem
	I0408 23:48:10.090789    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 23:48:10.111243    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
	I0408 23:48:10.155509    7680 start.go:296] duration metric: took 4.7191017s for postStartSetup
	I0408 23:48:10.159178    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:48:12.218775    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:48:12.218775    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:12.219798    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:48:14.693154    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:48:14.693154    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:14.694420    7680 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\config.json ...
	I0408 23:48:14.698302    7680 start.go:128] duration metric: took 2m3.6101097s to createHost
	I0408 23:48:14.698603    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:48:16.721165    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:48:16.721165    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:16.721499    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:48:19.184134    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:48:19.184651    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:19.191131    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:48:19.191932    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.119.206 22 <nil> <nil>}
	I0408 23:48:19.191932    7680 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 23:48:19.322730    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744156099.349496772
	
	I0408 23:48:19.322819    7680 fix.go:216] guest clock: 1744156099.349496772
	I0408 23:48:19.322819    7680 fix.go:229] Guest: 2025-04-08 23:48:19.349496772 +0000 UTC Remote: 2025-04-08 23:48:14.6984524 +0000 UTC m=+129.066470901 (delta=4.651044372s)
	I0408 23:48:19.323027    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:48:21.398377    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:48:21.398377    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:21.399311    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:48:23.815997    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:48:23.815997    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:23.823228    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:48:23.823970    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.119.206 22 <nil> <nil>}
	I0408 23:48:23.823970    7680 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744156099
	I0408 23:48:23.972884    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr  8 23:48:19 UTC 2025
	
	I0408 23:48:23.972884    7680 fix.go:236] clock set: Tue Apr  8 23:48:19 UTC 2025
	 (err=<nil>)
	I0408 23:48:23.972884    7680 start.go:83] releasing machines lock for "ha-061400", held for 2m12.8852393s
	I0408 23:48:23.972884    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:48:26.028915    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:48:26.028915    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:26.028915    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:48:28.465373    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:48:28.465373    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:28.469333    7680 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0408 23:48:28.469404    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:48:28.483440    7680 ssh_runner.go:195] Run: cat /version.json
	I0408 23:48:28.483440    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:48:30.708233    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:48:30.708233    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:30.708233    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:48:30.722675    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:48:30.723580    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:30.723580    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:48:33.317056    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:48:33.317634    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:33.317634    7680 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0408 23:48:33.343806    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:48:33.343806    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:48:33.343806    7680 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0408 23:48:33.418288    7680 ssh_runner.go:235] Completed: cat /version.json: (4.9347833s)
	I0408 23:48:33.431856    7680 ssh_runner.go:195] Run: systemctl --version
	I0408 23:48:33.437011    7680 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9676127s)
	W0408 23:48:33.437011    7680 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0408 23:48:33.454283    7680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0408 23:48:33.462481    7680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 23:48:33.472801    7680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 23:48:33.503373    7680 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 23:48:33.503373    7680 start.go:495] detecting cgroup driver to use...
	I0408 23:48:33.503373    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:48:33.550859    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0408 23:48:33.568525    7680 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0408 23:48:33.568601    7680 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0408 23:48:33.582205    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 23:48:33.601734    7680 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 23:48:33.612459    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 23:48:33.641890    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:48:33.673820    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 23:48:33.704040    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:48:33.732538    7680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 23:48:33.763459    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 23:48:33.792444    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 23:48:33.823010    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0408 23:48:33.856879    7680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 23:48:33.873481    7680 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 23:48:33.884201    7680 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 23:48:33.921136    7680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0408 23:48:33.948819    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:48:34.159015    7680 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 23:48:34.188652    7680 start.go:495] detecting cgroup driver to use...
	I0408 23:48:34.201484    7680 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0408 23:48:34.237127    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:48:34.268443    7680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 23:48:34.306555    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:48:34.341974    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 23:48:34.376665    7680 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0408 23:48:34.442336    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 23:48:34.464787    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:48:34.513002    7680 ssh_runner.go:195] Run: which cri-dockerd
	I0408 23:48:34.529599    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0408 23:48:34.552405    7680 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0408 23:48:34.607713    7680 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0408 23:48:34.826450    7680 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0408 23:48:34.999269    7680 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0408 23:48:34.999704    7680 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0408 23:48:35.041706    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:48:35.253852    7680 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 23:48:37.883559    7680 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6294976s)
	I0408 23:48:37.895865    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0408 23:48:37.930543    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 23:48:37.961693    7680 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0408 23:48:38.176435    7680 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0408 23:48:38.390290    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:48:38.592435    7680 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0408 23:48:38.633001    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 23:48:38.669266    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:48:38.875755    7680 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0408 23:48:39.000905    7680 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0408 23:48:39.012336    7680 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0408 23:48:39.021472    7680 start.go:563] Will wait 60s for crictl version
	I0408 23:48:39.033128    7680 ssh_runner.go:195] Run: which crictl
	I0408 23:48:39.050297    7680 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 23:48:39.102468    7680 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0408 23:48:39.112381    7680 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 23:48:39.154499    7680 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 23:48:39.191772    7680 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0408 23:48:39.191950    7680 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0408 23:48:39.196205    7680 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0408 23:48:39.196205    7680 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0408 23:48:39.196205    7680 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0408 23:48:39.196205    7680 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f4:da:75 Flags:up|broadcast|multicast|running}
	I0408 23:48:39.198845    7680 ip.go:214] interface addr: fe80::e8ab:9cc6:22b1:a5fc/64
	I0408 23:48:39.198845    7680 ip.go:214] interface addr: 192.168.112.1/20
	I0408 23:48:39.209721    7680 ssh_runner.go:195] Run: grep 192.168.112.1	host.minikube.internal$ /etc/hosts
	I0408 23:48:39.214681    7680 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
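host.minikube.internal is pinned to the host-side vEthernet address (192.168.112.1) with a strip-and-append rewrite of /etc/hosts, which stays idempotent across repeated starts; the same pattern reappears below for control-plane.minikube.internal. Replayed by hand, reusing $key from the earlier sketch:

    ssh -i $key docker@192.168.119.206 '{ grep -v "host.minikube.internal" /etc/hosts; echo "192.168.112.1 host.minikube.internal"; } > /tmp/h; sudo cp /tmp/h /etc/hosts'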
I0408 23:48:39.250983    7680 kubeadm.go:883] updating cluster {Name:ha-061400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP:192.168.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.119.206 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0408 23:48:39.251363    7680 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0408 23:48:39.259754    7680 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0408 23:48:39.281668    7680 docker.go:689] Got preloaded images: 
	I0408 23:48:39.281668    7680 docker.go:695] registry.k8s.io/kube-apiserver:v1.32.2 wasn't preloaded
	I0408 23:48:39.294343    7680 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0408 23:48:39.322987    7680 ssh_runner.go:195] Run: which lz4
	I0408 23:48:39.329635    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0408 23:48:39.344116    7680 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0408 23:48:39.353323    7680 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0408 23:48:39.353323    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (349803115 bytes)
	I0408 23:48:41.140315    7680 docker.go:653] duration metric: took 1.8103388s to copy over tarball
	I0408 23:48:41.151698    7680 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0408 23:48:49.871956    7680 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.7201436s)
	I0408 23:48:49.871956    7680 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0408 23:48:49.933325    7680 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0408 23:48:49.951679    7680 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0408 23:48:49.992466    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:48:50.233880    7680 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 23:48:53.353436    7680 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1194364s)
	I0408 23:48:53.364472    7680 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0408 23:48:53.395672    7680 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0408 23:48:53.395807    7680 cache_images.go:84] Images are preloaded, skipping loading
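The preload path avoids any registry pulls (which would fail here, given the registry.k8s.io warning): the host-side lz4 tarball is scp'd into the VM, untarred over /var/lib/docker, repositories.json is rewritten, and docker is restarted so the eight images above appear locally. A quick confirmation over SSH, mirroring the logged command and reusing $key:

    ssh -i $key docker@192.168.119.206 'docker images --format "{{.Repository}}:{{.Tag}}"'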
	I0408 23:48:53.395866    7680 kubeadm.go:934] updating node { 192.168.119.206 8443 v1.32.2 docker true true} ...
	I0408 23:48:53.395933    7680 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-061400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.119.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP:192.168.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 23:48:53.405093    7680 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0408 23:48:53.465284    7680 cni.go:84] Creating CNI manager for ""
	I0408 23:48:53.465353    7680 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0408 23:48:53.465401    7680 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0408 23:48:53.465452    7680 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.119.206 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-061400 NodeName:ha-061400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.119.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.119.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0408 23:48:53.465711    7680 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.119.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-061400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.119.206"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.119.206"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0408 23:48:53.465816    7680 kube-vip.go:115] generating kube-vip config ...
	I0408 23:48:53.477256    7680 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0408 23:48:53.504883    7680 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0408 23:48:53.505049    7680 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.127.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
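kube-vip runs as a static pod: the kubelet, configured with staticPodPath: /etc/kubernetes/manifests in the KubeletConfiguration above, runs any Pod manifest that appears in that directory without involving the API server, which is what lets the VIP (192.168.127.254 here) come up before the cluster itself does. minikube copies the manifest over SSH (the kube-vip.yaml scp below); a minimal local sketch of the same drop, using a temp-file-plus-atomic-rename write so the kubelet never reads a partial manifest (paths assumed):

package main

import (
	"os"
	"path/filepath"
)

// writeStaticPod places a manifest into the kubelet's staticPodPath via a
// temp file and rename; rename is atomic on the same filesystem.
func writeStaticPod(dir, name string, manifest []byte) error {
	tmp, err := os.CreateTemp(dir, name+".tmp*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // harmless after a successful rename
	if _, err := tmp.Write(manifest); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), filepath.Join(dir, name))
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: Pod\n# ... manifest as above ...\n")
	if err := writeStaticPod("/etc/kubernetes/manifests", "kube-vip.yaml", manifest); err != nil {
		panic(err)
	}
}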
	I0408 23:48:53.516288    7680 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0408 23:48:53.529501    7680 binaries.go:44] Found k8s binaries, skipping transfer
	I0408 23:48:53.540318    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0408 23:48:53.556320    7680 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0408 23:48:53.588746    7680 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 23:48:53.621791    7680 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I0408 23:48:53.657096    7680 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1449 bytes)
	I0408 23:48:53.704555    7680 ssh_runner.go:195] Run: grep 192.168.127.254	control-plane.minikube.internal$ /etc/hosts
	I0408 23:48:53.716343    7680 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.127.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
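The one-liner above is an idempotent edit: strip any existing control-plane.minikube.internal entry from /etc/hosts, append the VIP mapping, and copy the result back into place. A Go sketch of the same logic, assuming it runs as root and rewrites the file directly rather than via the temp-copy dance the shell version uses:

package main

import (
	"os"
	"strings"
)

func pinHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Match the grep -v $'\t<host>$' behaviour: drop lines ending
		// in a tab followed by the hostname.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	// Trim trailing empty elements so blank lines don't accumulate.
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, ip+"\t"+host, "")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.127.254", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}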
	I0408 23:48:53.745142    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:48:53.933155    7680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 23:48:53.962295    7680 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400 for IP: 192.168.119.206
	I0408 23:48:53.962295    7680 certs.go:194] generating shared ca certs ...
	I0408 23:48:53.962357    7680 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:48:53.963446    7680 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0408 23:48:53.963923    7680 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0408 23:48:53.964229    7680 certs.go:256] generating profile certs ...
	I0408 23:48:53.965024    7680 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\client.key
	I0408 23:48:53.965309    7680 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\client.crt with IP's: []
	I0408 23:48:54.258874    7680 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\client.crt ...
	I0408 23:48:54.258874    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\client.crt: {Name:mke2bc007cddace728408cfa573486bd1946f7c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:48:54.260517    7680 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\client.key ...
	I0408 23:48:54.261162    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\client.key: {Name:mk9ee30629538570a76961b95a9be009f3ff090b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:48:54.262652    7680 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.fe0ed964
	I0408 23:48:54.262652    7680 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.fe0ed964 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.119.206 192.168.127.254]
	I0408 23:48:54.864367    7680 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.fe0ed964 ...
	I0408 23:48:54.864367    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.fe0ed964: {Name:mk154aafd603f4e1a5f8bfb5dc76325526227ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:48:54.865821    7680 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.fe0ed964 ...
	I0408 23:48:54.865821    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.fe0ed964: {Name:mk32887fc5c7c23fab60f22f907cc887cf8f8d4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:48:54.866158    7680 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.fe0ed964 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt
	I0408 23:48:54.887265    7680 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.fe0ed964 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key
	I0408 23:48:54.888951    7680 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key
	I0408 23:48:54.889061    7680 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.crt with IP's: []
	I0408 23:48:55.335597    7680 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.crt ...
	I0408 23:48:55.335597    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.crt: {Name:mk3f541cb97fbe77652a4540a6c8315ef59d8cdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:48:55.337926    7680 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key ...
	I0408 23:48:55.337926    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key: {Name:mk7e3ee8dd9016b2873628e06d7b062b75eebac7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
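Conceptually, each "generating signed profile cert" step above mints a key pair and signs a certificate with the shared CA. A minimal crypto/x509 sketch of that flow; the inline self-signed CA stands in for the reused minikubeCA key, and the key type and subject names are assumptions for illustration:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Stand-in CA key and self-signed CA cert (minikube reuses ca.key instead).
	caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Client key plus a CA-signed client certificate, as for client.crt above.
	cliKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	check(err)
	cliTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user"}, // subject is an assumption
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	cliDER, err := x509.CreateCertificate(rand.Reader, cliTmpl, caCert, &cliKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: cliDER}))
}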
	I0408 23:48:55.339800    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 23:48:55.340076    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0408 23:48:55.340224    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 23:48:55.340404    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 23:48:55.340541    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 23:48:55.340712    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 23:48:55.340832    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 23:48:55.352112    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 23:48:55.353643    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem (1338 bytes)
	W0408 23:48:55.354147    7680 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864_empty.pem, impossibly tiny 0 bytes
	I0408 23:48:55.354147    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0408 23:48:55.354564    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0408 23:48:55.354564    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0408 23:48:55.354564    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0408 23:48:55.354564    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem (1708 bytes)
	I0408 23:48:55.355872    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:48:55.356307    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem -> /usr/share/ca-certificates/9864.pem
	I0408 23:48:55.356500    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /usr/share/ca-certificates/98642.pem
	I0408 23:48:55.357777    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 23:48:55.403173    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 23:48:55.448232    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 23:48:55.491432    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 23:48:55.532048    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0408 23:48:55.573677    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 23:48:55.617908    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 23:48:55.661926    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 23:48:55.707719    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 23:48:55.753804    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem --> /usr/share/ca-certificates/9864.pem (1338 bytes)
	I0408 23:48:55.805108    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /usr/share/ca-certificates/98642.pem (1708 bytes)
	I0408 23:48:55.858228    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0408 23:48:55.900915    7680 ssh_runner.go:195] Run: openssl version
	I0408 23:48:55.919761    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 23:48:55.949886    7680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:48:55.956943    7680 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:48:55.967634    7680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:48:55.989262    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 23:48:56.016429    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9864.pem && ln -fs /usr/share/ca-certificates/9864.pem /etc/ssl/certs/9864.pem"
	I0408 23:48:56.045374    7680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9864.pem
	I0408 23:48:56.051787    7680 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 23:04 /usr/share/ca-certificates/9864.pem
	I0408 23:48:56.062427    7680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9864.pem
	I0408 23:48:56.082891    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9864.pem /etc/ssl/certs/51391683.0"
	I0408 23:48:56.112156    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98642.pem && ln -fs /usr/share/ca-certificates/98642.pem /etc/ssl/certs/98642.pem"
	I0408 23:48:56.142896    7680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98642.pem
	I0408 23:48:56.149490    7680 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 23:04 /usr/share/ca-certificates/98642.pem
	I0408 23:48:56.159595    7680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98642.pem
	I0408 23:48:56.178928    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98642.pem /etc/ssl/certs/3ec20f2e.0"
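Each PEM installed under /usr/share/ca-certificates is then linked into /etc/ssl/certs under <subject-hash>.0, the name OpenSSL uses to locate a CA at verification time; that is what the openssl x509 -hash calls and ln -fs commands above implement. The same step as a Go sketch (requires the openssl binary; the path is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkCert(pemPath string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the subject hash.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}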
	I0408 23:48:56.210170    7680 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 23:48:56.216412    7680 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 23:48:56.216849    7680 kubeadm.go:392] StartCluster: {Name:ha-061400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP:192.168.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.119.206 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:48:56.226234    7680 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0408 23:48:56.258393    7680 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0408 23:48:56.286269    7680 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0408 23:48:56.315318    7680 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0408 23:48:56.331828    7680 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0408 23:48:56.331877    7680 kubeadm.go:157] found existing configuration files:
	
	I0408 23:48:56.343082    7680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0408 23:48:56.360774    7680 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0408 23:48:56.371868    7680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0408 23:48:56.400765    7680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0408 23:48:56.415776    7680 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0408 23:48:56.427526    7680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0408 23:48:56.457921    7680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0408 23:48:56.482073    7680 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0408 23:48:56.492371    7680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0408 23:48:56.519682    7680 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0408 23:48:56.534855    7680 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0408 23:48:56.546009    7680 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0408 23:48:56.566218    7680 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0408 23:48:57.030140    7680 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0408 23:49:11.683504    7680 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0408 23:49:11.683714    7680 kubeadm.go:310] [preflight] Running pre-flight checks
	I0408 23:49:11.683925    7680 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0408 23:49:11.684358    7680 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0408 23:49:11.684592    7680 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0408 23:49:11.684897    7680 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0408 23:49:11.690342    7680 out.go:235]   - Generating certificates and keys ...
	I0408 23:49:11.690342    7680 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0408 23:49:11.690342    7680 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0408 23:49:11.691079    7680 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0408 23:49:11.691205    7680 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0408 23:49:11.691205    7680 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0408 23:49:11.691205    7680 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0408 23:49:11.691733    7680 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0408 23:49:11.692018    7680 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-061400 localhost] and IPs [192.168.119.206 127.0.0.1 ::1]
	I0408 23:49:11.692018    7680 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0408 23:49:11.692018    7680 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-061400 localhost] and IPs [192.168.119.206 127.0.0.1 ::1]
	I0408 23:49:11.692637    7680 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0408 23:49:11.692974    7680 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0408 23:49:11.692974    7680 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0408 23:49:11.692974    7680 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0408 23:49:11.692974    7680 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0408 23:49:11.693587    7680 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0408 23:49:11.693587    7680 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0408 23:49:11.693854    7680 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0408 23:49:11.693854    7680 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0408 23:49:11.693854    7680 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0408 23:49:11.694396    7680 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0408 23:49:11.697634    7680 out.go:235]   - Booting up control plane ...
	I0408 23:49:11.698218    7680 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0408 23:49:11.698218    7680 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0408 23:49:11.698218    7680 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0408 23:49:11.698218    7680 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0408 23:49:11.698897    7680 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0408 23:49:11.699159    7680 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0408 23:49:11.699557    7680 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0408 23:49:11.699928    7680 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0408 23:49:11.700093    7680 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002145099s
	I0408 23:49:11.700271    7680 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0408 23:49:11.700418    7680 kubeadm.go:310] [api-check] The API server is healthy after 8.743297649s
	I0408 23:49:11.700745    7680 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0408 23:49:11.701110    7680 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0408 23:49:11.701259    7680 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0408 23:49:11.701734    7680 kubeadm.go:310] [mark-control-plane] Marking the node ha-061400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0408 23:49:11.701839    7680 kubeadm.go:310] [bootstrap-token] Using token: 1oehw4.v0ilnzd04t5ken5b
	I0408 23:49:11.704323    7680 out.go:235]   - Configuring RBAC rules ...
	I0408 23:49:11.704717    7680 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0408 23:49:11.704717    7680 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0408 23:49:11.704717    7680 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0408 23:49:11.705452    7680 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0408 23:49:11.705714    7680 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0408 23:49:11.705714    7680 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0408 23:49:11.706246    7680 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0408 23:49:11.706364    7680 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0408 23:49:11.706519    7680 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0408 23:49:11.706519    7680 kubeadm.go:310] 
	I0408 23:49:11.706625    7680 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0408 23:49:11.706625    7680 kubeadm.go:310] 
	I0408 23:49:11.706625    7680 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0408 23:49:11.706625    7680 kubeadm.go:310] 
	I0408 23:49:11.706625    7680 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0408 23:49:11.706625    7680 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0408 23:49:11.707255    7680 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0408 23:49:11.707255    7680 kubeadm.go:310] 
	I0408 23:49:11.707255    7680 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0408 23:49:11.707255    7680 kubeadm.go:310] 
	I0408 23:49:11.707255    7680 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0408 23:49:11.707255    7680 kubeadm.go:310] 
	I0408 23:49:11.707255    7680 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0408 23:49:11.707255    7680 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0408 23:49:11.707957    7680 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0408 23:49:11.707957    7680 kubeadm.go:310] 
	I0408 23:49:11.708070    7680 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0408 23:49:11.708070    7680 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0408 23:49:11.708070    7680 kubeadm.go:310] 
	I0408 23:49:11.708070    7680 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1oehw4.v0ilnzd04t5ken5b \
	I0408 23:49:11.708628    7680 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334 \
	I0408 23:49:11.708702    7680 kubeadm.go:310] 	--control-plane 
	I0408 23:49:11.708747    7680 kubeadm.go:310] 
	I0408 23:49:11.708837    7680 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0408 23:49:11.708837    7680 kubeadm.go:310] 
	I0408 23:49:11.708944    7680 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1oehw4.v0ilnzd04t5ken5b \
	I0408 23:49:11.709205    7680 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334 
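The --discovery-token-ca-cert-hash pin in both join commands above is a SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA certificate; it is how kubeadm lets a joining node authenticate the control plane before trusting anything it serves. A sketch that recomputes the value (the path is the in-VM location; run it where that file is readable):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the raw SubjectPublicKeyInfo, exactly the field kubeadm pins.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}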
	I0408 23:49:11.709205    7680 cni.go:84] Creating CNI manager for ""
	I0408 23:49:11.709205    7680 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0408 23:49:11.712548    7680 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0408 23:49:11.724293    7680 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0408 23:49:11.733136    7680 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0408 23:49:11.733136    7680 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0408 23:49:11.775152    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0408 23:49:12.523033    7680 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0408 23:49:12.537006    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 23:49:12.537006    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-061400 minikube.k8s.io/updated_at=2025_04_08T23_49_12_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83 minikube.k8s.io/name=ha-061400 minikube.k8s.io/primary=true
	I0408 23:49:12.553298    7680 ops.go:34] apiserver oom_adj: -16
	I0408 23:49:12.770225    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 23:49:13.272611    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 23:49:13.770168    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 23:49:14.268831    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 23:49:14.770096    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0408 23:49:14.957462    7680 kubeadm.go:1113] duration metric: took 2.4341643s to wait for elevateKubeSystemPrivileges
	I0408 23:49:14.957634    7680 kubeadm.go:394] duration metric: took 18.7405399s to StartCluster
	I0408 23:49:14.957787    7680 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:49:14.958089    7680 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0408 23:49:14.959926    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:49:14.961014    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0408 23:49:14.961014    7680 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.119.206 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 23:49:14.961014    7680 start.go:241] waiting for startup goroutines ...
	I0408 23:49:14.961014    7680 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0408 23:49:14.961689    7680 addons.go:69] Setting storage-provisioner=true in profile "ha-061400"
	I0408 23:49:14.961689    7680 addons.go:69] Setting default-storageclass=true in profile "ha-061400"
	I0408 23:49:14.961796    7680 addons.go:238] Setting addon storage-provisioner=true in "ha-061400"
	I0408 23:49:14.961796    7680 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-061400"
	I0408 23:49:14.961796    7680 host.go:66] Checking if "ha-061400" exists ...
	I0408 23:49:14.961796    7680 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:49:14.961796    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:49:14.961796    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:49:15.179113    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.112.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0408 23:49:15.545714    7680 start.go:971] {"host.minikube.internal": 192.168.112.1} host record injected into CoreDNS's ConfigMap
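The sed pipeline above edits the CoreDNS Corefile in place before replacing the ConfigMap: it inserts a log directive ahead of errors and a hosts block ahead of the forward stanza, so host.minikube.internal resolves to the Windows host from inside pods while fallthrough passes every other name on. Derived from the two sed expressions, the injected fragment looks like this (surrounding plugins elided):

        log
        errors
        ...
        hosts {
           192.168.112.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf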
	I0408 23:49:17.281961    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:49:17.282912    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:17.283023    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:49:17.283147    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:17.284187    7680 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0408 23:49:17.284938    7680 kapi.go:59] client config for ha-061400: &rest.Config{Host:"https://192.168.127.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-061400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-061400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2809400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0408 23:49:17.285909    7680 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0408 23:49:17.286944    7680 cert_rotation.go:140] Starting client certificate rotation controller
	I0408 23:49:17.287048    7680 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0408 23:49:17.287048    7680 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0408 23:49:17.287048    7680 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0408 23:49:17.287048    7680 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0408 23:49:17.288581    7680 addons.go:238] Setting addon default-storageclass=true in "ha-061400"
	I0408 23:49:17.288581    7680 host.go:66] Checking if "ha-061400" exists ...
	I0408 23:49:17.288581    7680 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 23:49:17.288789    7680 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0408 23:49:17.288962    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:49:17.289848    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:49:19.810344    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:49:19.810344    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:19.810344    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:49:19.864377    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:49:19.864377    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:19.864377    7680 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0408 23:49:19.864377    7680 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0408 23:49:19.864964    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:49:22.108017    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:49:22.108017    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:22.108017    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:49:22.551998    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:49:22.552081    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:22.552510    7680 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0408 23:49:22.703821    7680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0408 23:49:24.735038    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:49:24.736053    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:24.736177    7680 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0408 23:49:24.863889    7680 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0408 23:49:25.013362    7680 round_trippers.go:470] GET https://192.168.127.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0408 23:49:25.013362    7680 round_trippers.go:476] Request Headers:
	I0408 23:49:25.013362    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:49:25.013362    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:49:25.030435    7680 round_trippers.go:581] Response Status: 200 OK in 17 milliseconds
	I0408 23:49:25.030978    7680 round_trippers.go:470] PUT https://192.168.127.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0408 23:49:25.030978    7680 round_trippers.go:476] Request Headers:
	I0408 23:49:25.030978    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:49:25.030978    7680 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0408 23:49:25.030978    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:49:25.056875    7680 round_trippers.go:581] Response Status: 200 OK in 25 milliseconds
	I0408 23:49:25.064913    7680 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0408 23:49:25.067861    7680 addons.go:514] duration metric: took 10.106714s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0408 23:49:25.067861    7680 start.go:246] waiting for cluster config update ...
	I0408 23:49:25.067861    7680 start.go:255] writing updated cluster config ...
	I0408 23:49:25.071572    7680 out.go:201] 
	I0408 23:49:25.086301    7680 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:49:25.086504    7680 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\config.json ...
	I0408 23:49:25.094561    7680 out.go:177] * Starting "ha-061400-m02" control-plane node in "ha-061400" cluster
	I0408 23:49:25.100604    7680 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0408 23:49:25.100604    7680 cache.go:56] Caching tarball of preloaded images
	I0408 23:49:25.100604    7680 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0408 23:49:25.100604    7680 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0408 23:49:25.100604    7680 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\config.json ...
	I0408 23:49:25.105820    7680 start.go:360] acquireMachinesLock for ha-061400-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 23:49:25.105820    7680 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-061400-m02"
	I0408 23:49:25.106660    7680 start.go:93] Provisioning new machine with config: &{Name:ha-061400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP:192.168.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.119.206 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 23:49:25.106660    7680 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0408 23:49:25.110298    7680 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 23:49:25.111132    7680 start.go:159] libmachine.API.Create for "ha-061400" (driver="hyperv")
	I0408 23:49:25.111194    7680 client.go:168] LocalClient.Create starting
	I0408 23:49:25.111418    7680 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0408 23:49:25.111418    7680 main.go:141] libmachine: Decoding PEM data...
	I0408 23:49:25.111872    7680 main.go:141] libmachine: Parsing certificate...
	I0408 23:49:25.112043    7680 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0408 23:49:25.112233    7680 main.go:141] libmachine: Decoding PEM data...
	I0408 23:49:25.112233    7680 main.go:141] libmachine: Parsing certificate...
	I0408 23:49:25.112233    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0408 23:49:26.936855    7680 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0408 23:49:26.936855    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:26.937643    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0408 23:49:28.638933    7680 main.go:141] libmachine: [stdout =====>] : False
	
	I0408 23:49:28.639295    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:28.639295    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0408 23:49:30.094362    7680 main.go:141] libmachine: [stdout =====>] : True
	
	I0408 23:49:30.094985    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:30.095069    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0408 23:49:33.636198    7680 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0408 23:49:33.637009    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:33.639567    7680 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0408 23:49:34.116352    7680 main.go:141] libmachine: Creating SSH key...
	I0408 23:49:34.453600    7680 main.go:141] libmachine: Creating VM...
	I0408 23:49:34.453600    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0408 23:49:37.254787    7680 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0408 23:49:37.255179    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:37.255179    7680 main.go:141] libmachine: Using switch "Default Switch"
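Switch selection works over the ConvertTo-Json output above: the PowerShell query already filters to External switches or the pinned "Default Switch" GUID, and the driver then picks one. A sketch of that choice over the same JSON shape; the field names match the log, and the External-first preference order is an assumption:

package main

import (
	"encoding/json"
	"fmt"
)

type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int // in Hyper-V's enum: 0 = Private, 1 = Internal, 2 = External
}

func pick(raw []byte) (vmSwitch, error) {
	var switches []vmSwitch
	if err := json.Unmarshal(raw, &switches); err != nil {
		return vmSwitch{}, err
	}
	for _, s := range switches {
		if s.SwitchType == 2 { // prefer an External switch when present
			return s, nil
		}
	}
	if len(switches) > 0 {
		return switches[0], nil // e.g. the Default Switch from the log
	}
	return vmSwitch{}, fmt.Errorf("no usable Hyper-V switch")
}

func main() {
	raw := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
	s, err := pick(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println("using switch:", s.Name)
}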
	I0408 23:49:37.255287    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0408 23:49:39.035903    7680 main.go:141] libmachine: [stdout =====>] : True
	
	I0408 23:49:39.036099    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:39.036099    7680 main.go:141] libmachine: Creating VHD
	I0408 23:49:39.036099    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0408 23:49:42.893446    7680 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 8657F626-CBAE-4F1A-B23A-DAAD31A1A26E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0408 23:49:42.893983    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:42.894261    7680 main.go:141] libmachine: Writing magic tar header
	I0408 23:49:42.894713    7680 main.go:141] libmachine: Writing SSH key tar header
	I0408 23:49:42.906778    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0408 23:49:46.032619    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:49:46.032619    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:46.032619    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\disk.vhd' -SizeBytes 20000MB
	I0408 23:49:48.562045    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:49:48.562181    7680 main.go:141] libmachine: [stderr =====>] : 
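The fixed-then-dynamic VHD dance above seeds the disk before first boot: the "Writing magic tar header" and "Writing SSH key tar header" lines place a small tar archive at the start of the raw image so the boot2docker guest can detect the seed, extract the machine's SSH key, and only then format and grow the disk toward the resized 20000MB. A sketch of the seeding step under those assumptions; entry names and paths are illustrative, not libmachine's exact layout:

package main

import (
	"archive/tar"
	"os"
)

func seedDisk(vhdPath string, key []byte) error {
	f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f) // writes from offset 0, before any filesystem exists
	hdr := &tar.Header{Name: ".ssh/id_rsa", Mode: 0600, Size: int64(len(key))}
	if err := tw.WriteHeader(hdr); err != nil {
		return err
	}
	if _, err := tw.Write(key); err != nil {
		return err
	}
	return tw.Close()
}

func main() {
	key, err := os.ReadFile("id_rsa") // the key from "Creating SSH key..." above
	if err != nil {
		panic(err)
	}
	if err := seedDisk("disk.vhd", key); err != nil {
		panic(err)
	}
}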
	I0408 23:49:48.562181    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-061400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0408 23:49:52.144403    7680 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-061400-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0408 23:49:52.145409    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:52.145453    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-061400-m02 -DynamicMemoryEnabled $false
	I0408 23:49:54.396449    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:49:54.396449    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:54.396449    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-061400-m02 -Count 2
	I0408 23:49:56.534316    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:49:56.534462    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:56.534462    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-061400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\boot2docker.iso'
	I0408 23:49:59.066462    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:49:59.066847    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:49:59.066847    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-061400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\disk.vhd'
	I0408 23:50:01.663129    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:50:01.663322    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:01.663322    7680 main.go:141] libmachine: Starting VM...
	I0408 23:50:01.663322    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-061400-m02
	I0408 23:50:04.686552    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:50:04.687694    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:04.687694    7680 main.go:141] libmachine: Waiting for host to start...
	I0408 23:50:04.687694    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:06.932925    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:06.932925    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:06.932925    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:50:09.496559    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:50:09.496559    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:10.497407    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:12.728042    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:12.728042    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:12.728513    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:50:15.219276    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:50:15.219276    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:16.220503    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:18.411416    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:18.411648    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:18.411648    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:50:20.967316    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:50:20.967316    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:21.967639    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:24.253469    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:24.253469    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:24.253604    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:50:26.826934    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:50:26.827350    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:27.828496    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:30.069749    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:30.070701    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:30.070701    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:50:32.670259    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:50:32.670259    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:32.670946    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:34.843064    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:34.843064    7680 main.go:141] libmachine: [stderr =====>] : 
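The "Waiting for host to start..." section is a poll loop: the driver alternates between reading the VM state and the first IP address on the first network adapter until DHCP assigns one (about 28s in this run, with the address appearing at 23:50:32). A sketch using the same two cmdlets; the 1s sleep approximates the gap between retries seen in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func ps(cmd string) string {
        out, _ := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", cmd).Output()
        return strings.TrimSpace(string(out))
    }

    func main() {
        vm := "ha-061400-m02"
        for {
            // Probe state first, then the adapter's IP list; an empty IP
            // just means DHCP has not handed out an address yet.
            state := ps(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
            ip := ps(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
            if state == "Running" && ip != "" {
                fmt.Println("VM up at", ip)
                return
            }
            time.Sleep(time.Second)
        }
    }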
	I0408 23:50:34.843804    7680 machine.go:93] provisionDockerMachine start ...
	I0408 23:50:34.843804    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:37.001022    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:37.001828    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:37.001828    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:50:39.560208    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:50:39.560993    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:39.566823    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:50:39.581592    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.118.215 22 <nil> <nil>}
	I0408 23:50:39.581738    7680 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 23:50:39.717615    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 23:50:39.717615    7680 buildroot.go:166] provisioning hostname "ha-061400-m02"
	I0408 23:50:39.717615    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:41.897975    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:41.898315    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:41.898315    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:50:44.431012    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:50:44.431012    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:44.438131    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:50:44.438245    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.118.215 22 <nil> <nil>}
	I0408 23:50:44.438841    7680 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-061400-m02 && echo "ha-061400-m02" | sudo tee /etc/hostname
	I0408 23:50:44.606274    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-061400-m02
	
	I0408 23:50:44.606274    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:46.692428    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:46.692532    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:46.692532    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:50:49.224140    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:50:49.224140    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:49.231721    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:50:49.232272    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.118.215 22 <nil> <nil>}
	I0408 23:50:49.232354    7680 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-061400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-061400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-061400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 23:50:49.398744    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
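The hostname provisioning above runs over SSH with the generated machine key: set the kernel hostname, persist it in /etc/hostname, then make /etc/hosts resolve it (rewrite an existing 127.0.1.1 entry or append one). A sketch of the same idea using golang.org/x/crypto/ssh; the host-key check is skipped as test rigs do, and the IP and key path are the ones from this run:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "192.168.118.215:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // no host-key pinning on test rigs
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()

        name := "ha-061400-m02"
        cmds := []string{
            fmt.Sprintf(`sudo hostname %s && echo "%s" | sudo tee /etc/hostname`, name, name),
            fmt.Sprintf(`grep -q '\s%s$' /etc/hosts || echo '127.0.1.1 %s' | sudo tee -a /etc/hosts`, name, name),
        }
        for _, c := range cmds {
            sess, err := client.NewSession()
            if err != nil {
                panic(err)
            }
            out, err := sess.CombinedOutput(c)
            sess.Close()
            fmt.Printf("%s\n%s", c, out)
            if err != nil {
                panic(err)
            }
        }
    }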
	I0408 23:50:49.398949    7680 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0408 23:50:49.399084    7680 buildroot.go:174] setting up certificates
	I0408 23:50:49.399159    7680 provision.go:84] configureAuth start
	I0408 23:50:49.399276    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:51.551311    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:51.552323    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:51.552540    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:50:54.199621    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:50:54.199621    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:54.199621    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:50:56.376626    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:50:56.376626    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:56.377257    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:50:58.926832    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:50:58.926832    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:50:58.926832    7680 provision.go:143] copyHostCerts
	I0408 23:50:58.927024    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0408 23:50:58.927330    7680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0408 23:50:58.927419    7680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0408 23:50:58.927943    7680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0408 23:50:58.929186    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0408 23:50:58.929464    7680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0408 23:50:58.929464    7680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0408 23:50:58.929851    7680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0408 23:50:58.930960    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0408 23:50:58.931318    7680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0408 23:50:58.931318    7680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0408 23:50:58.931662    7680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0408 23:50:58.932280    7680 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-061400-m02 san=[127.0.0.1 192.168.118.215 ha-061400-m02 localhost minikube]
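The "generating server cert" line embeds both IP and DNS SANs so the Docker TLS endpoint on port 2376 validates under any name a client might dial (127.0.0.1, the machine IP, the node name, localhost, minikube). A self-contained sketch with crypto/x509; a throwaway CA is generated inline instead of loading ca.pem, purely to keep the example runnable:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA (the real flow loads ca.pem / ca-key.pem instead).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-061400-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(10, 0, 0),
            // SAN set matching the log's san=[...] list: IPs plus hostnames.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.118.215")},
            DNSNames:    []string{"ha-061400-m02", "localhost", "minikube"},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }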
	I0408 23:50:59.298698    7680 provision.go:177] copyRemoteCerts
	I0408 23:50:59.311822    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 23:50:59.311822    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:01.413791    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:01.413791    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:01.413885    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:03.905233    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:03.905233    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:03.905233    7680 sshutil.go:53] new ssh client: &{IP:192.168.118.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\id_rsa Username:docker}
	I0408 23:51:04.008672    7680 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6967878s)
	I0408 23:51:04.008672    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0408 23:51:04.009297    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0408 23:51:04.054930    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0408 23:51:04.054930    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0408 23:51:04.106383    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0408 23:51:04.107015    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0408 23:51:04.149801    7680 provision.go:87] duration metric: took 14.7504488s to configureAuth
	I0408 23:51:04.149801    7680 buildroot.go:189] setting minikube options for container-runtime
	I0408 23:51:04.149801    7680 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:51:04.149801    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:06.260659    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:06.260659    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:06.260659    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:08.809271    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:08.809271    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:08.815428    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:51:08.816103    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.118.215 22 <nil> <nil>}
	I0408 23:51:08.816103    7680 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0408 23:51:08.961881    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0408 23:51:08.961881    7680 buildroot.go:70] root file system type: tmpfs
	I0408 23:51:08.961881    7680 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0408 23:51:08.961881    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:11.080770    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:11.080830    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:11.080969    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:13.647629    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:13.647629    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:13.655078    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:51:13.655838    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.118.215 22 <nil> <nil>}
	I0408 23:51:13.655838    7680 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.119.206"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0408 23:51:13.834752    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.119.206
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0408 23:51:13.834752    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:15.953152    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:15.953787    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:15.953905    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:18.454760    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:18.454760    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:18.461020    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:51:18.461172    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.118.215 22 <nil> <nil>}
	I0408 23:51:18.461172    7680 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0408 23:51:20.704335    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0408 23:51:20.704445    7680 machine.go:96] duration metric: took 45.860001s to provisionDockerMachine
	I0408 23:51:20.704445    7680 client.go:171] duration metric: took 1m55.5917338s to LocalClient.Create
	I0408 23:51:20.704507    7680 start.go:167] duration metric: took 1m55.5926909s to libmachine.API.Create "ha-061400"
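The `diff -u ... || { mv ...; systemctl ... }` one-liner a few lines up is an idempotency guard: the new unit is swapped in and docker restarted only when the rendered file actually differs from what is installed (here diff failed because no docker.service existed yet, so the unit was installed and the symlink created). The same guard, sketched in Go against local files; paths are the ones from the log and this would run as root in practice:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        oldPath := "/lib/systemd/system/docker.service"
        newPath := "/lib/systemd/system/docker.service.new"

        oldBytes, err := os.ReadFile(oldPath) // a missing file counts as "different"
        newBytes, _ := os.ReadFile(newPath)
        if err == nil && bytes.Equal(oldBytes, newBytes) {
            fmt.Println("unit unchanged; skipping restart")
            return
        }
        if err := os.Rename(newPath, oldPath); err != nil {
            panic(err)
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", "docker"},
            {"systemctl", "restart", "docker"},
        } {
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                panic(fmt.Sprintf("%v: %s", err, out))
            }
        }
    }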
	I0408 23:51:20.704586    7680 start.go:293] postStartSetup for "ha-061400-m02" (driver="hyperv")
	I0408 23:51:20.704608    7680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 23:51:20.717095    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 23:51:20.717095    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:22.822959    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:22.823522    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:22.823522    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:25.375870    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:25.376714    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:25.376714    7680 sshutil.go:53] new ssh client: &{IP:192.168.118.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\id_rsa Username:docker}
	I0408 23:51:25.486134    7680 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7688944s)
	I0408 23:51:25.497212    7680 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 23:51:25.504554    7680 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 23:51:25.504554    7680 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0408 23:51:25.505065    7680 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0408 23:51:25.505459    7680 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
	I0408 23:51:25.505459    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /etc/ssl/certs/98642.pem
	I0408 23:51:25.517073    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 23:51:25.535672    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
	I0408 23:51:25.581421    7680 start.go:296] duration metric: took 4.8767484s for postStartSetup
	I0408 23:51:25.584475    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:27.654042    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:27.654042    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:27.654731    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:30.221944    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:30.222279    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:30.222375    7680 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\config.json ...
	I0408 23:51:30.225386    7680 start.go:128] duration metric: took 2m5.1170821s to createHost
	I0408 23:51:30.225386    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:32.306219    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:32.306219    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:32.306219    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:34.793264    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:34.794046    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:34.799581    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:51:34.800164    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.118.215 22 <nil> <nil>}
	I0408 23:51:34.800214    7680 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 23:51:34.935220    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744156294.961864315
	
	I0408 23:51:34.935220    7680 fix.go:216] guest clock: 1744156294.961864315
	I0408 23:51:34.935220    7680 fix.go:229] Guest: 2025-04-08 23:51:34.961864315 +0000 UTC Remote: 2025-04-08 23:51:30.2253864 +0000 UTC m=+324.590838901 (delta=4.736477915s)
	I0408 23:51:34.935220    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:36.991554    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:36.991554    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:36.991641    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:39.518867    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:39.518867    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:39.524967    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:51:39.525498    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.118.215 22 <nil> <nil>}
	I0408 23:51:39.525498    7680 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744156294
	I0408 23:51:39.679155    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr  8 23:51:34 UTC 2025
	
	I0408 23:51:39.679155    7680 fix.go:236] clock set: Tue Apr  8 23:51:34 UTC 2025
	 (err=<nil>)
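The fix.go lines implement guest clock sync: read the guest's `date +%s.%N`, compare against the host clock, and reset the guest with `sudo date -s @<epoch>` when the drift matters (about 4.7s here, rounded to whole seconds). A sketch of the comparison; the 1-second threshold is illustrative, not minikube's actual cutoff:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestOut := "1744156294.961864315" // sample stdout from `date +%s.%N`
        secs, _ := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        guest := time.Unix(int64(secs), 0)

        host := time.Now()
        delta := guest.Sub(host)
        if math.Abs(delta.Seconds()) > 1 { // threshold is illustrative
            // The log's correction: push the host's epoch into the guest.
            fmt.Printf("drift %v; would run: sudo date -s @%d\n", delta, host.Unix())
        }
    }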
	I0408 23:51:39.679155    7680 start.go:83] releasing machines lock for "ha-061400-m02", held for 2m14.570947s
	I0408 23:51:39.679348    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:41.790733    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:41.790733    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:41.790733    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:44.285376    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:44.286080    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:44.289066    7680 out.go:177] * Found network options:
	I0408 23:51:44.292754    7680 out.go:177]   - NO_PROXY=192.168.119.206
	W0408 23:51:44.295422    7680 proxy.go:119] fail to check proxy env: Error ip not in block
	I0408 23:51:44.298087    7680 out.go:177]   - NO_PROXY=192.168.119.206
	W0408 23:51:44.300514    7680 proxy.go:119] fail to check proxy env: Error ip not in block
	W0408 23:51:44.302073    7680 proxy.go:119] fail to check proxy env: Error ip not in block
	I0408 23:51:44.303979    7680 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0408 23:51:44.303979    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:44.313538    7680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0408 23:51:44.313538    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m02 ).state
	I0408 23:51:46.531342    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:46.531342    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:46.531342    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:46.589783    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:46.590295    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:46.590492    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m02 ).networkadapters[0]).ipaddresses[0]
	I0408 23:51:49.153660    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:49.153660    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:49.154261    7680 sshutil.go:53] new ssh client: &{IP:192.168.118.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\id_rsa Username:docker}
	I0408 23:51:49.191618    7680 main.go:141] libmachine: [stdout =====>] : 192.168.118.215
	
	I0408 23:51:49.191618    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:49.191901    7680 sshutil.go:53] new ssh client: &{IP:192.168.118.215 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m02\id_rsa Username:docker}
	I0408 23:51:49.251700    7680 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9476551s)
	W0408 23:51:49.251700    7680 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0408 23:51:49.286590    7680 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9729866s)
	W0408 23:51:49.286590    7680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 23:51:49.300131    7680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 23:51:49.331137    7680 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 23:51:49.331201    7680 start.go:495] detecting cgroup driver to use...
	I0408 23:51:49.331454    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:51:49.374914    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0408 23:51:49.405720    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0408 23:51:49.416815    7680 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0408 23:51:49.416887    7680 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0408 23:51:49.428021    7680 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 23:51:49.438732    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 23:51:49.468979    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:51:49.502834    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 23:51:49.530402    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:51:49.561734    7680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 23:51:49.592054    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 23:51:49.620273    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 23:51:49.649398    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
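The run of `sed -i -r` commands rewrites /etc/containerd/config.toml in place: sandbox image, cgroup driver, runtime type, CNI conf dir, unprivileged ports. One of those edits expressed as a Go regexp, to make the capture-group trick (preserving the original indentation via `\1`) explicit:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := []byte("          SystemdCgroup = true\n")
        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false"))
        fmt.Print(string(out))
    }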
	I0408 23:51:49.679367    7680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 23:51:49.696698    7680 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 23:51:49.707474    7680 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 23:51:49.739920    7680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
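The sysctl probe failing with status 255 is expected before br_netfilter is loaded, which is why the log marks it "might be okay" and falls back to modprobe before enabling IP forwarding. The probe-then-load pattern, sketched (the /proc write needs root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Probe: fails until the bridge netfilter module is loaded.
        if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            fmt.Println("probe failed (expected before module load):", err)
            if err := exec.Command("sudo", "modprobe", "br_netfilter"); err == nil {
                _ = err
            }
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                panic(err)
            }
        }
        // Same effect as the log's `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
            panic(err)
        }
    }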
	I0408 23:51:49.768525    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:51:49.958388    7680 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 23:51:49.990761    7680 start.go:495] detecting cgroup driver to use...
	I0408 23:51:50.002571    7680 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0408 23:51:50.037454    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:51:50.068632    7680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 23:51:50.110899    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:51:50.144867    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 23:51:50.176622    7680 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0408 23:51:50.236348    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 23:51:50.260696    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:51:50.306903    7680 ssh_runner.go:195] Run: which cri-dockerd
	I0408 23:51:50.323778    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0408 23:51:50.339812    7680 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0408 23:51:50.390340    7680 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0408 23:51:50.589983    7680 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0408 23:51:50.771160    7680 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0408 23:51:50.771268    7680 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
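The 130-byte /etc/docker/daemon.json pushed here switches docker to the cgroupfs driver. Its exact contents are not echoed in the log; the sketch below renders the fields minikube typically writes for this step, so treat the values as assumptions:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Assumed defaults; only the exec-opts line is implied by the
        // "configuring docker to use cgroupfs" log message.
        cfg := map[string]any{
            "exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
            "log-driver":     "json-file",
            "log-opts":       map[string]string{"max-size": "100m"},
            "storage-driver": "overlay2",
        }
        b, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(b))
    }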
	I0408 23:51:50.813676    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:51:51.014877    7680 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 23:51:53.595452    7680 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5805415s)
	I0408 23:51:53.605124    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0408 23:51:53.639109    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 23:51:53.676568    7680 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0408 23:51:53.851837    7680 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0408 23:51:54.032978    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:51:54.218859    7680 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0408 23:51:54.258094    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 23:51:54.290848    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:51:54.473830    7680 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0408 23:51:54.582402    7680 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0408 23:51:54.595350    7680 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
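"Will wait 60s for socket path" is a stat-poll against /var/run/cri-dockerd.sock: retry until the socket appears or the deadline passes. A sketch of that loop; the poll interval is illustrative:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        deadline := time.Now().Add(60 * time.Second)
        for {
            if _, err := os.Stat("/var/run/cri-dockerd.sock"); err == nil {
                fmt.Println("socket ready")
                return
            }
            if time.Now().After(deadline) {
                panic("timed out waiting for /var/run/cri-dockerd.sock")
            }
            time.Sleep(500 * time.Millisecond)
        }
    }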
	I0408 23:51:54.604136    7680 start.go:563] Will wait 60s for crictl version
	I0408 23:51:54.613815    7680 ssh_runner.go:195] Run: which crictl
	I0408 23:51:54.630092    7680 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 23:51:54.685019    7680 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0408 23:51:54.695653    7680 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 23:51:54.736307    7680 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 23:51:54.775694    7680 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0408 23:51:54.779495    7680 out.go:177]   - env NO_PROXY=192.168.119.206
	I0408 23:51:54.782726    7680 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0408 23:51:54.786962    7680 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0408 23:51:54.786962    7680 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0408 23:51:54.786962    7680 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0408 23:51:54.786962    7680 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f4:da:75 Flags:up|broadcast|multicast|running}
	I0408 23:51:54.789904    7680 ip.go:214] interface addr: fe80::e8ab:9cc6:22b1:a5fc/64
	I0408 23:51:54.789904    7680 ip.go:214] interface addr: 192.168.112.1/20
	I0408 23:51:54.799936    7680 ssh_runner.go:195] Run: grep 192.168.112.1	host.minikube.internal$ /etc/hosts
	I0408 23:51:54.806531    7680 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 23:51:54.826947    7680 mustload.go:65] Loading cluster: ha-061400
	I0408 23:51:54.827195    7680 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:51:54.828344    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:51:56.934099    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:56.934163    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:56.934163    7680 host.go:66] Checking if "ha-061400" exists ...
	I0408 23:51:56.934964    7680 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400 for IP: 192.168.118.215
	I0408 23:51:56.934964    7680 certs.go:194] generating shared ca certs ...
	I0408 23:51:56.934964    7680 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:51:56.935885    7680 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0408 23:51:56.936465    7680 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0408 23:51:56.936684    7680 certs.go:256] generating profile certs ...
	I0408 23:51:56.936979    7680 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\client.key
	I0408 23:51:56.937585    7680 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.b63c1d01
	I0408 23:51:56.937644    7680 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.b63c1d01 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.119.206 192.168.118.215 192.168.127.254]
	I0408 23:51:57.251981    7680 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.b63c1d01 ...
	I0408 23:51:57.251981    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.b63c1d01: {Name:mk302d2222fa2b96163094148d492cc5223092ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:51:57.251981    7680 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.b63c1d01 ...
	I0408 23:51:57.251981    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.b63c1d01: {Name:mk852e0eda79569f305cf26eff880333ce4f458a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:51:57.251981    7680 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.b63c1d01 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt
	I0408 23:51:57.277431    7680 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.b63c1d01 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key
	I0408 23:51:57.279302    7680 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key
	I0408 23:51:57.279302    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 23:51:57.279493    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0408 23:51:57.279708    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 23:51:57.279832    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 23:51:57.280032    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 23:51:57.280151    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 23:51:57.280151    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 23:51:57.280151    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 23:51:57.280995    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem (1338 bytes)
	W0408 23:51:57.281583    7680 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864_empty.pem, impossibly tiny 0 bytes
	I0408 23:51:57.281734    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0408 23:51:57.282139    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0408 23:51:57.282439    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0408 23:51:57.282735    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0408 23:51:57.283168    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem (1708 bytes)
	I0408 23:51:57.283168    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:51:57.283865    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem -> /usr/share/ca-certificates/9864.pem
	I0408 23:51:57.283934    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /usr/share/ca-certificates/98642.pem
	I0408 23:51:57.284306    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:51:59.454137    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:51:59.454137    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:51:59.454137    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:52:01.939992    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:52:01.939992    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:52:01.941850    7680 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0408 23:52:02.052842    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0408 23:52:02.061481    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0408 23:52:02.090806    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0408 23:52:02.099584    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0408 23:52:02.133515    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0408 23:52:02.140950    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0408 23:52:02.175432    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0408 23:52:02.181931    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0408 23:52:02.215461    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0408 23:52:02.223444    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0408 23:52:02.263407    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0408 23:52:02.270435    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0408 23:52:02.300178    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 23:52:02.351996    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 23:52:02.404581    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 23:52:02.450228    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 23:52:02.496740    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0408 23:52:02.543093    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0408 23:52:02.588336    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 23:52:02.633048    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 23:52:02.678867    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 23:52:02.733547    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem --> /usr/share/ca-certificates/9864.pem (1338 bytes)
	I0408 23:52:02.787720    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /usr/share/ca-certificates/98642.pem (1708 bytes)
	I0408 23:52:02.830338    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0408 23:52:02.861814    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0408 23:52:02.891872    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0408 23:52:02.921895    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0408 23:52:02.952372    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0408 23:52:02.986801    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0408 23:52:03.019812    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0408 23:52:03.068330    7680 ssh_runner.go:195] Run: openssl version
	I0408 23:52:03.088144    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98642.pem && ln -fs /usr/share/ca-certificates/98642.pem /etc/ssl/certs/98642.pem"
	I0408 23:52:03.120271    7680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98642.pem
	I0408 23:52:03.127455    7680 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 23:04 /usr/share/ca-certificates/98642.pem
	I0408 23:52:03.139120    7680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98642.pem
	I0408 23:52:03.161640    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98642.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 23:52:03.193296    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 23:52:03.224195    7680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:52:03.232335    7680 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:52:03.242574    7680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:52:03.262747    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0408 23:52:03.294542    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9864.pem && ln -fs /usr/share/ca-certificates/9864.pem /etc/ssl/certs/9864.pem"
	I0408 23:52:03.326329    7680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9864.pem
	I0408 23:52:03.333746    7680 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 23:04 /usr/share/ca-certificates/9864.pem
	I0408 23:52:03.345174    7680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9864.pem
	I0408 23:52:03.364942    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9864.pem /etc/ssl/certs/51391683.0"
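	The three rounds above follow the standard OpenSSL c_rehash pattern for trusting a CA system-wide: link the PEM under /usr/share/ca-certificates, compute its subject hash with `openssl x509 -hash -noout`, then link the hash-named file (e.g. b5213941.0 for minikubeCA.pem) into /etc/ssl/certs. A minimal Go sketch of that pattern, shelling out to openssl the same way the log does; `installCA` and the paths are illustrative, not minikube's actual code:

```go
// Minimal sketch (not minikube code) of the hash-and-symlink pattern above:
// compute the OpenSSL subject hash of a PEM certificate and link it into
// /etc/ssl/certs/<hash>.0, which is how system TLS lookup finds trusted CAs.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the subject hash;
	// the log shows "b5213941" for minikubeCA.pem, hence b5213941.0.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // `ln -fs` semantics: replace a stale link if one exists
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```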
	I0408 23:52:03.399479    7680 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 23:52:03.407531    7680 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 23:52:03.407531    7680 kubeadm.go:934] updating node {m02 192.168.118.215 8443 v1.32.2 docker true true} ...
	I0408 23:52:03.408059    7680 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-061400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.118.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP:192.168.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 23:52:03.408059    7680 kube-vip.go:115] generating kube-vip config ...
	I0408 23:52:03.420771    7680 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0408 23:52:03.454516    7680 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0408 23:52:03.454516    7680 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.127.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
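	The manifest above is produced by minikube's kube-vip config step (kube-vip.go:137 in the log), with the VIP 192.168.127.254 on eth0 and the API port 8443 filled in. A minimal sketch of rendering such a static-pod manifest with text/template; the template, the `params` type, and its field set are assumptions for illustration only, not minikube's real generator:

```go
// Hedged sketch of generating a kube-vip static-pod manifest like the one
// logged above. Only the values (image, VIP, interface, port) come from the
// log; the template itself is a simplified stand-in.
package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - name: address
      value: {{.VIP}}
    - name: port
      value: "{{.Port}}"
    - name: vip_interface
      value: {{.Interface}}
  hostNetwork: true
`

type params struct {
	Image, VIP, Interface string
	Port                  int
}

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	// Values taken from the log: VIP 192.168.127.254 on eth0, API port 8443.
	if err := t.Execute(os.Stdout, params{
		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.10",
		VIP:       "192.168.127.254",
		Interface: "eth0",
		Port:      8443,
	}); err != nil {
		panic(err)
	}
}
```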
	I0408 23:52:03.468702    7680 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0408 23:52:03.487066    7680 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0408 23:52:03.501840    7680 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0408 23:52:03.531453    7680 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm
	I0408 23:52:03.531650    7680 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl
	I0408 23:52:03.531650    7680 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet
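	The `?checksum=file:<url>.sha256` suffix on these URLs is a go-getter-style convention: fetch the artifact, fetch the published digest, and compare before installing. A self-contained sketch of that verify-then-write flow under the same URLs as the log; error handling is simplified and this is not minikube's download.go:

```go
// Hedged sketch of the download step: fetch a k8s release binary, verify it
// against the published SHA-256 digest, and only then write it to disk.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256") // published digest file
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // digest file holds the hex string
	if hex.EncodeToString(got[:]) != want {
		fmt.Fprintln(os.Stderr, "checksum mismatch")
		os.Exit(1)
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
}
```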
	I0408 23:52:04.987172    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0408 23:52:04.996795    7680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0408 23:52:05.003835    7680 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0408 23:52:05.004793    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0408 23:52:05.231918    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm -> /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0408 23:52:05.242921    7680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0408 23:52:05.251909    7680 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0408 23:52:05.251909    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
	I0408 23:52:05.263926    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 23:52:05.319632    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0408 23:52:05.331906    7680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0408 23:52:05.348891    7680 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0408 23:52:05.348958    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0408 23:52:06.270412    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0408 23:52:06.289220    7680 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0408 23:52:06.333913    7680 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 23:52:06.365720    7680 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1443 bytes)
	I0408 23:52:06.411799    7680 ssh_runner.go:195] Run: grep 192.168.127.254	control-plane.minikube.internal$ /etc/hosts
	I0408 23:52:06.417614    7680 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.127.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
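	The one-liner above is an idempotent /etc/hosts update: strip any stale control-plane.minikube.internal entry, append the current VIP mapping, and copy the result back into place. The same logic as a Go sketch; it assumes root, and /etc/hosts.tmp is an illustrative temp path kept on the same filesystem so the final rename is atomic (the log uses `sudo cp` from /tmp instead):

```go
// Sketch of the idempotent /etc/hosts rewrite: drop any existing
// control-plane.minikube.internal line, append the current VIP mapping,
// and swap the file into place.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.127.254\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}

	var b strings.Builder
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirrors the log's grep -v $'\tcontrol-plane.minikube.internal$'
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		b.WriteString(line)
		b.WriteByte('\n')
	}
	b.WriteString(entry + "\n")

	const tmp = "/etc/hosts.tmp"
	if err := os.WriteFile(tmp, []byte(b.String()), 0o644); err != nil {
		panic(err)
	}
	if err := os.Rename(tmp, "/etc/hosts"); err != nil {
		panic(err)
	}
}
```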
	I0408 23:52:06.453793    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:52:06.660845    7680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 23:52:06.693729    7680 host.go:66] Checking if "ha-061400" exists ...
	I0408 23:52:06.694629    7680 start.go:317] joinCluster: &{Name:ha-061400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP:192.168.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.119.206 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.118.215 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:52:06.694629    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0408 23:52:06.694629    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:52:08.810408    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:52:08.810408    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:52:08.810408    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:52:11.380606    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:52:11.381491    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:52:11.381491    7680 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0408 23:52:11.983301    7680 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0": (5.2885242s)
	I0408 23:52:11.983450    7680 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.118.215 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 23:52:11.983565    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 67n8ol.hj0bx7fxbu2j590a --discovery-token-ca-cert-hash sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-061400-m02 --control-plane --apiserver-advertise-address=192.168.118.215 --apiserver-bind-port=8443"
	I0408 23:52:52.890535    7680 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 67n8ol.hj0bx7fxbu2j590a --discovery-token-ca-cert-hash sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-061400-m02 --control-plane --apiserver-advertise-address=192.168.118.215 --apiserver-bind-port=8443": (40.9064324s)
	I0408 23:52:52.890535    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0408 23:52:53.604714    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-061400-m02 minikube.k8s.io/updated_at=2025_04_08T23_52_53_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83 minikube.k8s.io/name=ha-061400 minikube.k8s.io/primary=false
	I0408 23:52:53.780370    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-061400-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0408 23:52:53.974016    7680 start.go:319] duration metric: took 47.2787653s to joinCluster
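	After the join completes, the log labels the new node and removes the control-plane NoSchedule taint so m02 can also run workloads (the `kubectl taint ... :NoSchedule-` call above). A hedged client-go sketch of that taint removal as a read-modify-update; the node name comes from the log, while the kubeconfig path is an assumption:

```go
// Hedged client-go sketch of the taint-removal step: fetch the node, drop
// the control-plane NoSchedule taint if present, and write the node back.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ctx := context.Background()
	node, err := cs.CoreV1().Nodes().Get(ctx, "ha-061400-m02", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	kept := node.Spec.Taints[:0]
	for _, t := range node.Spec.Taints {
		// Equivalent of `kubectl taint nodes <n> node-role.kubernetes.io/control-plane:NoSchedule-`
		if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
			continue
		}
		kept = append(kept, t)
	}
	node.Spec.Taints = kept

	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("control-plane taint removed from", node.Name)
}
```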
	I0408 23:52:53.975070    7680 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.118.215 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 23:52:53.975859    7680 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:52:53.978145    7680 out.go:177] * Verifying Kubernetes components...
	I0408 23:52:53.995071    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:52:54.349110    7680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 23:52:54.386374    7680 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0408 23:52:54.387021    7680 kapi.go:59] client config for ha-061400: &rest.Config{Host:"https://192.168.127.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-061400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-061400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2809400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0408 23:52:54.387173    7680 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.127.254:8443 with https://192.168.119.206:8443
	I0408 23:52:54.388356    7680 node_ready.go:35] waiting up to 6m0s for node "ha-061400-m02" to be "Ready" ...
	I0408 23:52:54.388684    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:54.388741    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:54.388773    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:54.388773    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:54.410067    7680 round_trippers.go:581] Response Status: 200 OK in 21 milliseconds
	I0408 23:52:54.888845    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:54.889437    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:54.889437    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:54.889437    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:54.894111    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:52:55.390279    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:55.390279    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:55.390279    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:55.390279    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:55.396588    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:52:55.888932    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:55.888932    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:55.888932    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:55.888932    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:55.895046    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:52:56.389053    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:56.389053    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:56.389053    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:56.389053    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:56.394117    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:52:56.395125    7680 node_ready.go:53] node "ha-061400-m02" has status "Ready":"False"
	I0408 23:52:56.889777    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:56.889777    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:56.889777    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:56.889777    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:56.895742    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:52:57.389060    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:57.389060    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:57.389060    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:57.389060    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:57.393910    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:52:57.889146    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:57.889146    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:57.889146    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:57.889335    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:58.032379    7680 round_trippers.go:581] Response Status: 200 OK in 143 milliseconds
	I0408 23:52:58.389498    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:58.389498    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:58.389562    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:58.389685    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:58.393061    7680 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0408 23:52:58.889096    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:58.889096    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:58.889096    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:58.889096    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:58.895277    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:52:58.896611    7680 node_ready.go:53] node "ha-061400-m02" has status "Ready":"False"
	I0408 23:52:59.388912    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:59.388912    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:59.388912    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:59.388912    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:59.417410    7680 round_trippers.go:581] Response Status: 200 OK in 28 milliseconds
	I0408 23:52:59.888966    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:52:59.888966    7680 round_trippers.go:476] Request Headers:
	I0408 23:52:59.888966    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:52:59.888966    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:52:59.895278    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:00.389460    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:00.389460    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:00.389460    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:00.389460    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:00.394065    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:00.888774    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:00.888774    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:00.888774    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:00.888774    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:00.895468    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:01.389055    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:01.389055    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:01.389055    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:01.389055    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:01.393007    7680 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0408 23:53:01.393997    7680 node_ready.go:53] node "ha-061400-m02" has status "Ready":"False"
	I0408 23:53:01.889420    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:01.889420    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:01.889420    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:01.889420    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:01.896280    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:02.389484    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:02.389484    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:02.389484    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:02.389484    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:02.395478    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:02.889677    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:02.889739    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:02.889739    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:02.889739    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:02.895057    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:03.389064    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:03.389064    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:03.389064    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:03.389064    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:03.393802    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:03.890026    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:03.890026    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:03.890026    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:03.890026    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:03.896182    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:03.896751    7680 node_ready.go:53] node "ha-061400-m02" has status "Ready":"False"
	I0408 23:53:04.389771    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:04.389811    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:04.389811    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:04.389865    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:04.401590    7680 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0408 23:53:04.890418    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:04.890418    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:04.890418    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:04.890418    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:04.901358    7680 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0408 23:53:05.389510    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:05.389510    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:05.389510    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:05.389510    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:05.394479    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:05.889453    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:05.889453    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:05.889453    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:05.889453    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:05.895858    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:06.389631    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:06.389631    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:06.389631    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:06.389631    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:06.400111    7680 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0408 23:53:06.400489    7680 node_ready.go:53] node "ha-061400-m02" has status "Ready":"False"
	I0408 23:53:06.888748    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:06.888748    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:06.888748    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:06.888748    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:06.894994    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:07.389708    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:07.389780    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:07.389780    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:07.389861    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:07.394665    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:07.890273    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:07.890401    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:07.890401    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:07.890401    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:07.896090    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:08.389580    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:08.389580    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:08.389580    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:08.389580    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:08.395224    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:08.888944    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:08.888944    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:08.888944    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:08.888944    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:08.894268    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:08.896008    7680 node_ready.go:53] node "ha-061400-m02" has status "Ready":"False"
	I0408 23:53:09.388721    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:09.388721    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:09.388721    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:09.388721    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:09.394323    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:09.889461    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:09.889461    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:09.889461    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:09.889461    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:09.895937    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:10.389464    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:10.389510    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:10.389510    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:10.389510    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:10.393909    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:10.888939    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:10.888939    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:10.888939    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:10.888939    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:10.895108    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:11.388991    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:11.388991    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:11.388991    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:11.388991    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:11.393483    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:11.394630    7680 node_ready.go:53] node "ha-061400-m02" has status "Ready":"False"
	I0408 23:53:11.889362    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:11.889362    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:11.889362    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:11.889362    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:11.895192    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:12.389187    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:12.389187    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:12.389187    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:12.389187    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:12.400576    7680 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0408 23:53:12.888833    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:12.888833    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:12.888833    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:12.888833    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:12.894857    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:13.389165    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:13.389165    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:13.389165    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:13.389165    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:13.397967    7680 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0408 23:53:13.398762    7680 node_ready.go:53] node "ha-061400-m02" has status "Ready":"False"
	I0408 23:53:13.888933    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:13.888933    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:13.888933    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:13.888933    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:13.895271    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:14.389924    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:14.390010    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.390010    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.390069    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.392996    7680 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0408 23:53:14.889808    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:14.889808    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.889974    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.889974    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.895868    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:14.896228    7680 node_ready.go:49] node "ha-061400-m02" has status "Ready":"True"
	I0408 23:53:14.896316    7680 node_ready.go:38] duration metric: took 20.50763s for node "ha-061400-m02" to be "Ready" ...
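	The node_ready.go loop above is a plain poll of GET /api/v1/nodes/<name> until the node's NodeReady condition turns True (here it took 20.5s). Sketched with client-go instead of minikube's internal round-tripper logging; the 6m timeout matches the log's "waiting up to 6m0s", while the 500ms interval and kubeconfig discovery are assumptions:

```go
// What the node_ready.go polling amounts to, sketched with client-go:
// poll the node object until its NodeReady condition reports True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-061400-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println(`node "ha-061400-m02" is Ready`)
}
```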
	I0408 23:53:14.896440    7680 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 23:53:14.896633    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods
	I0408 23:53:14.896633    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.896747    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.896747    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.900996    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:14.905084    7680 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-rzk8c" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:14.905290    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-rzk8c
	I0408 23:53:14.905290    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.905290    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.905348    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.914639    7680 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0408 23:53:14.915332    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:14.915332    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.915332    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.915332    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.919303    7680 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0408 23:53:14.919610    7680 pod_ready.go:93] pod "coredns-668d6bf9bc-rzk8c" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:14.919702    7680 pod_ready.go:82] duration metric: took 14.6183ms for pod "coredns-668d6bf9bc-rzk8c" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:14.919702    7680 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-scvcr" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:14.919824    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-scvcr
	I0408 23:53:14.919851    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.919894    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.919894    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.924173    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:14.924760    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:14.924760    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.924760    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.924760    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.928824    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:14.929473    7680 pod_ready.go:93] pod "coredns-668d6bf9bc-scvcr" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:14.929503    7680 pod_ready.go:82] duration metric: took 9.8006ms for pod "coredns-668d6bf9bc-scvcr" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:14.929503    7680 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:14.929692    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-061400
	I0408 23:53:14.929692    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.929692    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.929692    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.932989    7680 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0408 23:53:14.932989    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:14.932989    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.932989    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.932989    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.937078    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:14.937919    7680 pod_ready.go:93] pod "etcd-ha-061400" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:14.937919    7680 pod_ready.go:82] duration metric: took 8.3451ms for pod "etcd-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:14.937982    7680 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:14.938071    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-061400-m02
	I0408 23:53:14.938071    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.938132    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.938132    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.945844    7680 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0408 23:53:14.946393    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:14.946393    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:14.946393    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:14.946393    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:14.948579    7680 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0408 23:53:14.949680    7680 pod_ready.go:93] pod "etcd-ha-061400-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:14.949680    7680 pod_ready.go:82] duration metric: took 11.6982ms for pod "etcd-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:14.949728    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:15.089872    7680 request.go:661] Waited for 140.1414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-061400
	I0408 23:53:15.089872    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-061400
	I0408 23:53:15.089872    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:15.089872    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:15.089872    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:15.100282    7680 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0408 23:53:15.290287    7680 request.go:661] Waited for 187.8719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:15.290287    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:15.290287    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:15.290287    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:15.290287    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:15.300468    7680 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0408 23:53:15.300612    7680 pod_ready.go:93] pod "kube-apiserver-ha-061400" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:15.300612    7680 pod_ready.go:82] duration metric: took 350.8788ms for pod "kube-apiserver-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:15.300612    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:15.490532    7680 request.go:661] Waited for 189.9175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-061400-m02
	I0408 23:53:15.490981    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-061400-m02
	I0408 23:53:15.490981    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:15.490981    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:15.491142    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:15.496996    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:15.690514    7680 request.go:661] Waited for 193.1252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:15.690514    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:15.690514    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:15.690514    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:15.690514    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:15.696202    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:15.696888    7680 pod_ready.go:93] pod "kube-apiserver-ha-061400-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:15.696888    7680 pod_ready.go:82] duration metric: took 396.2709ms for pod "kube-apiserver-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:15.696888    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:15.890032    7680 request.go:661] Waited for 192.6554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-061400
	I0408 23:53:15.890032    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-061400
	I0408 23:53:15.890526    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:15.890526    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:15.890580    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:15.907152    7680 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0408 23:53:16.089843    7680 request.go:661] Waited for 181.9354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:16.090291    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:16.090291    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:16.090291    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:16.090291    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:16.095941    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:16.095941    7680 pod_ready.go:93] pod "kube-controller-manager-ha-061400" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:16.095941    7680 pod_ready.go:82] duration metric: took 399.0483ms for pod "kube-controller-manager-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:16.095941    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:16.290598    7680 request.go:661] Waited for 194.6541ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-061400-m02
	I0408 23:53:16.290598    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-061400-m02
	I0408 23:53:16.290598    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:16.290598    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:16.290598    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:16.296828    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:16.489506    7680 request.go:661] Waited for 191.7759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:16.489506    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:16.489506    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:16.489506    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:16.489506    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:16.495375    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:16.495732    7680 pod_ready.go:93] pod "kube-controller-manager-ha-061400-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:16.495732    7680 pod_ready.go:82] duration metric: took 399.7848ms for pod "kube-controller-manager-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:16.495732    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lr9jb" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:16.689565    7680 request.go:661] Waited for 193.5779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lr9jb
	I0408 23:53:16.689565    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lr9jb
	I0408 23:53:16.689565    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:16.689565    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:16.689565    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:16.696231    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:16.890217    7680 request.go:661] Waited for 192.957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:16.890217    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:16.890217    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:16.890217    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:16.890217    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:16.896072    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:16.896721    7680 pod_ready.go:93] pod "kube-proxy-lr9jb" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:16.896721    7680 pod_ready.go:82] duration metric: took 400.798ms for pod "kube-proxy-lr9jb" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:16.896776    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nkwqr" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:17.089757    7680 request.go:661] Waited for 192.8919ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nkwqr
	I0408 23:53:17.089757    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nkwqr
	I0408 23:53:17.090188    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:17.090188    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:17.090188    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:17.095005    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:53:17.289698    7680 request.go:661] Waited for 194.5127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:17.289698    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:17.289698    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:17.289698    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:17.289698    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:17.297131    7680 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0408 23:53:17.297614    7680 pod_ready.go:93] pod "kube-proxy-nkwqr" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:17.297667    7680 pod_ready.go:82] duration metric: took 400.8855ms for pod "kube-proxy-nkwqr" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:17.297667    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:17.490575    7680 request.go:661] Waited for 192.9054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-061400
	I0408 23:53:17.491192    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-061400
	I0408 23:53:17.491192    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:17.491192    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:17.491192    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:17.496937    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:17.689970    7680 request.go:661] Waited for 192.6087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:17.689970    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:53:17.689970    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:17.689970    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:17.689970    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:17.695445    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:17.695781    7680 pod_ready.go:93] pod "kube-scheduler-ha-061400" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:17.695922    7680 pod_ready.go:82] duration metric: took 398.109ms for pod "kube-scheduler-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:17.695922    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:17.889517    7680 request.go:661] Waited for 193.5927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-061400-m02
	I0408 23:53:17.889517    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-061400-m02
	I0408 23:53:17.889517    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:17.889517    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:17.889517    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:17.894627    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:18.090665    7680 request.go:661] Waited for 195.6453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:18.090665    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:53:18.090665    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:18.090665    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:18.090665    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:18.097490    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:53:18.097977    7680 pod_ready.go:93] pod "kube-scheduler-ha-061400-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 23:53:18.098086    7680 pod_ready.go:82] duration metric: took 402.1585ms for pod "kube-scheduler-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:53:18.098086    7680 pod_ready.go:39] duration metric: took 3.2016031s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
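What the lines above record is minikube's readiness gate: for every system-critical pod it GETs the pod object, then the pod's node, with the client-side rate limiter spacing the calls roughly 190 ms apart (the "Waited for ... due to client-side throttling" lines). A minimal client-go sketch of the same kind of wait loop follows; the kubeconfig path is illustrative, and the pod name is taken from the log:

    // Sketch only: poll one pod until its PodReady condition is True,
    // in the spirit of the pod_ready.go wait loop above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func isReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // the 6m0s budget seen in the log
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-ha-061400-m02", metav1.GetOptions{})
            if err == nil && isReady(pod) {
                fmt.Println("ready")
                return
            }
            time.Sleep(400 * time.Millisecond) // roughly the cadence visible above
        }
        panic("pod never became ready")
    }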
	I0408 23:53:18.098193    7680 api_server.go:52] waiting for apiserver process to appear ...
	I0408 23:53:18.110025    7680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 23:53:18.137634    7680 api_server.go:72] duration metric: took 24.1622444s to wait for apiserver process to appear ...
	I0408 23:53:18.137634    7680 api_server.go:88] waiting for apiserver healthz status ...
	I0408 23:53:18.137634    7680 api_server.go:253] Checking apiserver healthz at https://192.168.119.206:8443/healthz ...
	I0408 23:53:18.155108    7680 api_server.go:279] https://192.168.119.206:8443/healthz returned 200:
	ok
	I0408 23:53:18.155358    7680 round_trippers.go:470] GET https://192.168.119.206:8443/version
	I0408 23:53:18.155443    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:18.155443    7680 round_trippers.go:480]     Accept: application/json, */*
	I0408 23:53:18.155443    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:18.157185    7680 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0408 23:53:18.157185    7680 api_server.go:141] control plane version: v1.32.2
	I0408 23:53:18.157185    7680 api_server.go:131] duration metric: took 19.5511ms to wait for apiserver health ...
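Once every pod is Ready, minikube probes the apiserver directly: /healthz must return 200 with body "ok", and /version reports the control-plane version (v1.32.2 here). An equivalent probe, sketched with net/http; skipping TLS verification is an illustration-only shortcut, where minikube itself trusts the cluster CA:

    // Sketch: hit the apiserver health and version endpoints as the log does.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Illustration only: skip cert verification instead of loading the minikube CA bundle.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        for _, path := range []string{"/healthz", "/version"} {
            resp, err := client.Get("https://192.168.119.206:8443" + path)
            if err != nil {
                fmt.Println(path, "error:", err)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Println(path, resp.StatusCode, string(body))
        }
    }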
	I0408 23:53:18.157185    7680 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 23:53:18.290358    7680 request.go:661] Waited for 133.1716ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods
	I0408 23:53:18.290358    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods
	I0408 23:53:18.290358    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:18.290358    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:18.290358    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:18.297373    7680 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0408 23:53:18.302165    7680 system_pods.go:59] 17 kube-system pods found
	I0408 23:53:18.302236    7680 system_pods.go:61] "coredns-668d6bf9bc-rzk8c" [18f6703f-34ad-403f-b86d-9a8f3dc927a0] Running
	I0408 23:53:18.302344    7680 system_pods.go:61] "coredns-668d6bf9bc-scvcr" [952efdd7-d201-4747-833a-59e05925e74f] Running
	I0408 23:53:18.302344    7680 system_pods.go:61] "etcd-ha-061400" [429dfaa4-c9bf-47dc-81f9-ab33ad3acee4] Running
	I0408 23:53:18.302344    7680 system_pods.go:61] "etcd-ha-061400-m02" [5fa6b2de-e3e8-4c95-84e3-3e344ce6a56f] Running
	I0408 23:53:18.302344    7680 system_pods.go:61] "kindnet-44mc6" [a8a857e1-90f1-4346-97a7-0b083352aeda] Running
	I0408 23:53:18.302344    7680 system_pods.go:61] "kindnet-7mvqz" [3fcc4494-1878-48e2-97ee-f76dcff55c29] Running
	I0408 23:53:18.302402    7680 system_pods.go:61] "kube-apiserver-ha-061400" [488f7097-53fd-4754-aa77-78aed24b3494] Running
	I0408 23:53:18.302402    7680 system_pods.go:61] "kube-apiserver-ha-061400-m02" [1f83551d-39c0-4485-b4a6-d44c3e58b435] Running
	I0408 23:53:18.302402    7680 system_pods.go:61] "kube-controller-manager-ha-061400" [28c1163e-e283-49b0-bab7-b91d1b73ab27] Running
	I0408 23:53:18.302444    7680 system_pods.go:61] "kube-controller-manager-ha-061400-m02" [89ab7c55-91a9-452b-9c0e-3673bf608abc] Running
	I0408 23:53:18.302444    7680 system_pods.go:61] "kube-proxy-lr9jb" [4ea29fd2-fb54-44d7-a558-a272fd4f05f5] Running
	I0408 23:53:18.302482    7680 system_pods.go:61] "kube-proxy-nkwqr" [20f509f0-ca9e-4464-b87f-e5d226ce9e3c] Running
	I0408 23:53:18.302482    7680 system_pods.go:61] "kube-scheduler-ha-061400" [b16bc563-a6aa-49d3-b7c4-74b5827bb66e] Running
	I0408 23:53:18.302482    7680 system_pods.go:61] "kube-scheduler-ha-061400-m02" [e9a386c4-fe99-49d0-bff9-d434ba81d735] Running
	I0408 23:53:18.302482    7680 system_pods.go:61] "kube-vip-ha-061400" [b677e4c1-39bf-459c-a33c-ecfce817e2a5] Running
	I0408 23:53:18.302482    7680 system_pods.go:61] "kube-vip-ha-061400-m02" [2a30dc1d-3208-468f-8614-a469337f5ac2] Running
	I0408 23:53:18.302482    7680 system_pods.go:61] "storage-provisioner" [bd11797d-cec8-419e-b7e9-1d537d9a7378] Running
	I0408 23:53:18.302482    7680 system_pods.go:74] duration metric: took 145.2951ms to wait for pod list to return data ...
	I0408 23:53:18.302579    7680 default_sa.go:34] waiting for default service account to be created ...
	I0408 23:53:18.489976    7680 request.go:661] Waited for 187.3685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/default/serviceaccounts
	I0408 23:53:18.489976    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/default/serviceaccounts
	I0408 23:53:18.489976    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:18.489976    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:18.489976    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:18.495143    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:18.495468    7680 default_sa.go:45] found service account: "default"
	I0408 23:53:18.495468    7680 default_sa.go:55] duration metric: took 192.8863ms for default service account to be created ...
	I0408 23:53:18.495468    7680 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 23:53:18.690501    7680 request.go:661] Waited for 195.0304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods
	I0408 23:53:18.690501    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods
	I0408 23:53:18.690501    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:18.690501    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:18.690501    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:18.696208    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:18.698979    7680 system_pods.go:86] 17 kube-system pods found
	I0408 23:53:18.699060    7680 system_pods.go:89] "coredns-668d6bf9bc-rzk8c" [18f6703f-34ad-403f-b86d-9a8f3dc927a0] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "coredns-668d6bf9bc-scvcr" [952efdd7-d201-4747-833a-59e05925e74f] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "etcd-ha-061400" [429dfaa4-c9bf-47dc-81f9-ab33ad3acee4] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "etcd-ha-061400-m02" [5fa6b2de-e3e8-4c95-84e3-3e344ce6a56f] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "kindnet-44mc6" [a8a857e1-90f1-4346-97a7-0b083352aeda] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "kindnet-7mvqz" [3fcc4494-1878-48e2-97ee-f76dcff55c29] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "kube-apiserver-ha-061400" [488f7097-53fd-4754-aa77-78aed24b3494] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "kube-apiserver-ha-061400-m02" [1f83551d-39c0-4485-b4a6-d44c3e58b435] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "kube-controller-manager-ha-061400" [28c1163e-e283-49b0-bab7-b91d1b73ab27] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "kube-controller-manager-ha-061400-m02" [89ab7c55-91a9-452b-9c0e-3673bf608abc] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "kube-proxy-lr9jb" [4ea29fd2-fb54-44d7-a558-a272fd4f05f5] Running
	I0408 23:53:18.699060    7680 system_pods.go:89] "kube-proxy-nkwqr" [20f509f0-ca9e-4464-b87f-e5d226ce9e3c] Running
	I0408 23:53:18.699129    7680 system_pods.go:89] "kube-scheduler-ha-061400" [b16bc563-a6aa-49d3-b7c4-74b5827bb66e] Running
	I0408 23:53:18.699234    7680 system_pods.go:89] "kube-scheduler-ha-061400-m02" [e9a386c4-fe99-49d0-bff9-d434ba81d735] Running
	I0408 23:53:18.699234    7680 system_pods.go:89] "kube-vip-ha-061400" [b677e4c1-39bf-459c-a33c-ecfce817e2a5] Running
	I0408 23:53:18.699347    7680 system_pods.go:89] "kube-vip-ha-061400-m02" [2a30dc1d-3208-468f-8614-a469337f5ac2] Running
	I0408 23:53:18.699347    7680 system_pods.go:89] "storage-provisioner" [bd11797d-cec8-419e-b7e9-1d537d9a7378] Running
	I0408 23:53:18.699347    7680 system_pods.go:126] duration metric: took 203.8759ms to wait for k8s-apps to be running ...
	I0408 23:53:18.699347    7680 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 23:53:18.710357    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 23:53:18.736388    7680 system_svc.go:56] duration metric: took 37.0412ms WaitForService to wait for kubelet
	I0408 23:53:18.736388    7680 kubeadm.go:582] duration metric: took 24.7609912s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 23:53:18.737346    7680 node_conditions.go:102] verifying NodePressure condition ...
	I0408 23:53:18.889956    7680 request.go:661] Waited for 152.608ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes
	I0408 23:53:18.890540    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes
	I0408 23:53:18.890540    7680 round_trippers.go:476] Request Headers:
	I0408 23:53:18.890540    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:53:18.890540    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:53:18.896069    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:53:18.896715    7680 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 23:53:18.896715    7680 node_conditions.go:123] node cpu capacity is 2
	I0408 23:53:18.896715    7680 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 23:53:18.896715    7680 node_conditions.go:123] node cpu capacity is 2
	I0408 23:53:18.896715    7680 node_conditions.go:105] duration metric: took 159.3663ms to run NodePressure ...
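The NodePressure step reads each node's reported capacity, two nodes at this point, each with 2 CPUs and 17734596Ki of ephemeral storage, to confirm neither is under resource pressure before a third node is added. Sketched as a helper that reuses a clientset built as in the readiness sketch above:

    // Sketch: print the per-node capacity fields the NodePressure check reads.
    // cs is a *kubernetes.Clientset; imports as in the readiness sketch.
    func printNodeCapacity(cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
        return nil
    }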
	I0408 23:53:18.896715    7680 start.go:241] waiting for startup goroutines ...
	I0408 23:53:18.896715    7680 start.go:255] writing updated cluster config ...
	I0408 23:53:18.901866    7680 out.go:201] 
	I0408 23:53:18.920442    7680 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:53:18.921474    7680 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\config.json ...
	I0408 23:53:18.930708    7680 out.go:177] * Starting "ha-061400-m03" control-plane node in "ha-061400" cluster
	I0408 23:53:18.933750    7680 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0408 23:53:18.933861    7680 cache.go:56] Caching tarball of preloaded images
	I0408 23:53:18.934001    7680 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0408 23:53:18.934001    7680 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0408 23:53:18.934575    7680 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\config.json ...
	I0408 23:53:18.941401    7680 start.go:360] acquireMachinesLock for ha-061400-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0408 23:53:18.941401    7680 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-061400-m03"
	I0408 23:53:18.942067    7680 start.go:93] Provisioning new machine with config: &{Name:ha-061400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP:192.168.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.119.206 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.118.215 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
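The &{...} dump above is the full cluster config being carried into the new-node phase. The part that drives what happens next is the Nodes slice: m03 appears with an empty IP, which is what marks it as the control-plane node still to be provisioned. A trimmed, hypothetical Go view of just those entries (field names mirror the dump, not necessarily minikube's source):

    // Hypothetical, trimmed view of the node entries in the config dump above.
    type nodeEntry struct {
        Name              string
        IP                string // empty for m03: not yet provisioned
        Port              int
        KubernetesVersion string
        ControlPlane      bool
        Worker            bool
    }

    var nodes = []nodeEntry{
        {Name: "", IP: "192.168.119.206", Port: 8443, KubernetesVersion: "v1.32.2", ControlPlane: true, Worker: true},
        {Name: "m02", IP: "192.168.118.215", Port: 8443, KubernetesVersion: "v1.32.2", ControlPlane: true, Worker: true},
        {Name: "m03", IP: "", Port: 8443, KubernetesVersion: "v1.32.2", ControlPlane: true, Worker: true},
    }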
	I0408 23:53:18.942067    7680 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0408 23:53:18.948508    7680 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0408 23:53:18.949126    7680 start.go:159] libmachine.API.Create for "ha-061400" (driver="hyperv")
	I0408 23:53:18.949126    7680 client.go:168] LocalClient.Create starting
	I0408 23:53:18.949457    7680 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0408 23:53:18.949861    7680 main.go:141] libmachine: Decoding PEM data...
	I0408 23:53:18.949861    7680 main.go:141] libmachine: Parsing certificate...
	I0408 23:53:18.950131    7680 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0408 23:53:18.950454    7680 main.go:141] libmachine: Decoding PEM data...
	I0408 23:53:18.950454    7680 main.go:141] libmachine: Parsing certificate...
	I0408 23:53:18.950454    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0408 23:53:20.857708    7680 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0408 23:53:20.857708    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:20.857708    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0408 23:53:22.612466    7680 main.go:141] libmachine: [stdout =====>] : False
	
	I0408 23:53:22.612626    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:22.612717    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0408 23:53:24.101556    7680 main.go:141] libmachine: [stdout =====>] : True
	
	I0408 23:53:24.102107    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:24.102107    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0408 23:53:27.957971    7680 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0408 23:53:27.957971    7680 main.go:141] libmachine: [stderr =====>] : 
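Before touching the VM, the Hyper-V driver runs its PowerShell preflights, all visible verbatim in the [executing ==>] lines: is the Hyper-V module present (yes), is the user in the Hyper-V Administrators group (False; SID S-1-5-32-578), is the user a local Administrator (True, which suffices), and which virtual switch to use (only the built-in "Default Switch" exists, so it is chosen). A sketch of how such a preflight is shelled out from Go:

    // Sketch: run a PowerShell preflight the way the [executing ==>] lines show.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func ps(cmd string) (string, error) {
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", cmd,
        ).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        // Commands copied verbatim from the log above.
        mod, _ := ps(`@(Get-Module -ListAvailable hyper-v).Name | Get-Unique`)
        admin, _ := ps(`@([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")`)
        fmt.Println("hyper-v module:", mod, "| administrator:", admin)
    }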
	I0408 23:53:27.960021    7680 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0408 23:53:28.398547    7680 main.go:141] libmachine: Creating SSH key...
	I0408 23:53:29.364239    7680 main.go:141] libmachine: Creating VM...
	I0408 23:53:29.364239    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0408 23:53:32.359359    7680 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0408 23:53:32.360143    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:32.360143    7680 main.go:141] libmachine: Using switch "Default Switch"
	I0408 23:53:32.360143    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0408 23:53:34.118084    7680 main.go:141] libmachine: [stdout =====>] : True
	
	I0408 23:53:34.118769    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:34.118769    7680 main.go:141] libmachine: Creating VHD
	I0408 23:53:34.118769    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0408 23:53:37.963116    7680 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 2390A371-F1B2-4C2A-ABA8-80A853D65317
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0408 23:53:37.963116    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:37.963116    7680 main.go:141] libmachine: Writing magic tar header
	I0408 23:53:37.963116    7680 main.go:141] libmachine: Writing SSH key tar header
	I0408 23:53:37.976936    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0408 23:53:41.190377    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:53:41.190496    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:41.190571    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\disk.vhd' -SizeBytes 20000MB
	I0408 23:53:43.812870    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:53:43.812870    7680 main.go:141] libmachine: [stderr =====>] : 
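The disk sequence above is deliberate rather than redundant: a tiny 10 MB fixed VHD is created first, the SSH key is written straight into it behind a "magic" tar header (the boot2docker-style ISO looks for that marker on first boot, formats the disk, and extracts the key), then the file is converted to a dynamic VHD and only afterwards resized to the requested 20,000 MB, so the on-disk file stays small until the guest actually writes data. The same three Hyper-V cmdlet calls, sketched from Go with the ps helper from the preflight sketch:

    // Sketch of the disk preparation shown above; ps() is from the preflight sketch.
    base := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03`
    ps(fmt.Sprintf(`Hyper-V\New-VHD -Path '%s\fixed.vhd' -SizeBytes 10MB -Fixed`, base))
    // ...the "magic tar header" and SSH key are written into fixed.vhd here...
    ps(fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s\fixed.vhd' -DestinationPath '%s\disk.vhd' -VHDType Dynamic -DeleteSource`, base, base))
    ps(fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s\disk.vhd' -SizeBytes 20000MB`, base))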
	I0408 23:53:43.812978    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-061400-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0408 23:53:47.455397    7680 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-061400-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0408 23:53:47.455472    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:47.455564    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-061400-m03 -DynamicMemoryEnabled $false
	I0408 23:53:49.703951    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:53:49.704804    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:49.705081    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-061400-m03 -Count 2
	I0408 23:53:51.891541    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:53:51.891541    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:51.892250    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-061400-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\boot2docker.iso'
	I0408 23:53:54.502456    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:53:54.502456    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:54.503112    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-061400-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\disk.vhd'
	I0408 23:53:57.182140    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:53:57.182140    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:53:57.182140    7680 main.go:141] libmachine: Starting VM...
	I0408 23:53:57.182140    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-061400-m03
	I0408 23:54:00.372427    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:54:00.372427    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:00.372427    7680 main.go:141] libmachine: Waiting for host to start...
	I0408 23:54:00.373189    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:02.706214    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:02.706680    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:02.706680    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:54:05.363187    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:54:05.363187    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:06.363876    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:08.661355    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:08.661355    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:08.661637    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:54:11.302687    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:54:11.302749    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:12.303322    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:14.580207    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:14.581252    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:14.581306    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:54:17.163857    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:54:17.164205    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:18.165191    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:20.433715    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:20.433715    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:20.433715    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:54:23.041252    7680 main.go:141] libmachine: [stdout =====>] : 
	I0408 23:54:23.041252    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:24.042955    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:26.353871    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:26.353871    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:26.354770    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:54:29.050023    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:54:29.050023    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:29.050023    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:31.334140    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:31.335159    7680 main.go:141] libmachine: [stderr =====>] : 
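"Waiting for host to start..." is a plain poll loop: about once a second libmachine asks Hyper-V for the VM state and for the first IP address on the first network adapter, and keeps going until the guest's integration services report one (roughly 29 s here, at which point 192.168.126.102 appears). Sketched with the same ps helper:

    // Sketch of the wait-for-IP loop; ps() is from the preflight sketch.
    func waitForIP(vm string) (string, error) {
        for i := 0; i < 120; i++ { // illustrative retry budget
            state, _ := ps(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
            ip, _ := ps(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
            if state == "Running" && ip != "" {
                return ip, nil
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("timed out waiting for an IP on %s", vm)
    }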
	I0408 23:54:31.335234    7680 machine.go:93] provisionDockerMachine start ...
	I0408 23:54:31.335438    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:33.659626    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:33.660050    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:33.660174    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:54:36.294561    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:54:36.294561    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:36.303606    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:54:36.304607    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.126.102 22 <nil> <nil>}
	I0408 23:54:36.304607    7680 main.go:141] libmachine: About to run SSH command:
	hostname
	I0408 23:54:36.445191    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0408 23:54:36.445191    7680 buildroot.go:166] provisioning hostname "ha-061400-m03"
	I0408 23:54:36.445292    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:38.656366    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:38.656366    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:38.656366    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:54:41.255971    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:54:41.257094    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:41.262802    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:54:41.263356    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.126.102 22 <nil> <nil>}
	I0408 23:54:41.263466    7680 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-061400-m03 && echo "ha-061400-m03" | sudo tee /etc/hostname
	I0408 23:54:41.431476    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-061400-m03
	
	I0408 23:54:41.431822    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:43.611833    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:43.612719    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:43.612719    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:54:46.214054    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:54:46.214054    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:46.220367    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:54:46.220492    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.126.102 22 <nil> <nil>}
	I0408 23:54:46.220492    7680 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-061400-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-061400-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-061400-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0408 23:54:46.378969    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: 
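The shell fragment above is an idempotent hostname mapping: if no /etc/hosts line already ends in ha-061400-m03, it either rewrites an existing 127.0.1.1 entry in place with sed or appends a new one, so re-running provisioning never stacks duplicate entries. Minikube sends the whole fragment over the SSH session; assembling it looks roughly like this:

    // Sketch: build the /etc/hosts fixup as one SSH command string.
    package main

    import "fmt"

    func main() {
        name := "ha-061400-m03" // from the log
        script := fmt.Sprintf(
            `if ! grep -xq '.*\s%[1]s' /etc/hosts; then `+
                `if grep -xq '127.0.1.1\s.*' /etc/hosts; then `+
                `sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts; `+
                `else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi`, name)
        fmt.Println(script) // run over the SSH session established above
    }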
	I0408 23:54:46.378969    7680 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0408 23:54:46.378969    7680 buildroot.go:174] setting up certificates
	I0408 23:54:46.378969    7680 provision.go:84] configureAuth start
	I0408 23:54:46.378969    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:48.542192    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:48.542192    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:48.542192    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:54:51.142868    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:54:51.143675    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:51.143675    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:53.312251    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:53.313160    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:53.313160    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:54:55.862684    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:54:55.862684    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:55.862684    7680 provision.go:143] copyHostCerts
	I0408 23:54:55.863595    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0408 23:54:55.863886    7680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0408 23:54:55.863886    7680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0408 23:54:55.864537    7680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0408 23:54:55.865760    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0408 23:54:55.866066    7680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0408 23:54:55.866066    7680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0408 23:54:55.866066    7680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0408 23:54:55.866912    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0408 23:54:55.867613    7680 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0408 23:54:55.867613    7680 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0408 23:54:55.867613    7680 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0408 23:54:55.869055    7680 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-061400-m03 san=[127.0.0.1 192.168.126.102 ha-061400-m03 localhost minikube]
	I0408 23:54:55.899472    7680 provision.go:177] copyRemoteCerts
	I0408 23:54:55.909473    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0408 23:54:55.909473    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:54:58.076811    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:54:58.077010    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:54:58.077097    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:00.656887    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:00.657142    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:00.657142    7680 sshutil.go:53] new ssh client: &{IP:192.168.126.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\id_rsa Username:docker}
	I0408 23:55:00.768275    7680 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8587372s)
	I0408 23:55:00.768275    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0408 23:55:00.768969    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0408 23:55:00.817148    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0408 23:55:00.817148    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0408 23:55:00.862216    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0408 23:55:00.862674    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0408 23:55:00.904960    7680 provision.go:87] duration metric: took 14.5257976s to configureAuth
	I0408 23:55:00.904960    7680 buildroot.go:189] setting minikube options for container-runtime
	I0408 23:55:00.906022    7680 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:55:00.906248    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:55:03.068956    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:03.069792    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:03.069792    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:05.612828    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:05.612828    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:05.618022    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:55:05.618746    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.126.102 22 <nil> <nil>}
	I0408 23:55:05.618746    7680 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0408 23:55:05.757172    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0408 23:55:05.757172    7680 buildroot.go:70] root file system type: tmpfs
	I0408 23:55:05.757172    7680 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0408 23:55:05.757172    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:55:07.927578    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:07.927578    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:07.927707    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:10.471367    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:10.471367    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:10.478271    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:55:10.478824    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.126.102 22 <nil> <nil>}
	I0408 23:55:10.479017    7680 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.119.206"
	Environment="NO_PROXY=192.168.119.206,192.168.118.215"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0408 23:55:10.642222    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.119.206
	Environment=NO_PROXY=192.168.119.206,192.168.118.215
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Comment out TasksMax if your systemd version does not support it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
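The comment block in the unit above explains why ExecStart= is cleared before being set: systemd merges drop-ins additively, and a second ExecStart= on a non-oneshot service is rejected. A minimal sketch of the same pattern (the override path and the dockerd flags here are placeholders, not the exact minikube values):

    sudo mkdir -p /etc/systemd/system/docker.service.d
    cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/override.conf
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
    EOF
    sudo systemctl daemon-reload
    systemctl cat docker.service    # prints the base unit plus the merged drop-in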
	I0408 23:55:10.642222    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:55:12.846397    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:12.846397    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:12.847411    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:15.438884    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:15.438884    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:15.445640    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:55:15.445791    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.126.102 22 <nil> <nil>}
	I0408 23:55:15.445791    7680 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0408 23:55:17.720100    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
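The command above is an install-if-changed idiom: diff -u exits 0 when the files match (so nothing runs), and non-zero when they differ or, as here, when the old file does not yet exist, which triggers the move, reload, and restart. A generic sketch of the same pattern (paths are placeholders):

    if ! sudo diff -u /path/to/current /path/to/candidate; then
        sudo mv /path/to/candidate /path/to/current
        sudo systemctl daemon-reload && sudo systemctl restart docker
    fi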
	
	I0408 23:55:17.720199    7680 machine.go:96] duration metric: took 46.3843472s to provisionDockerMachine
	I0408 23:55:17.720199    7680 client.go:171] duration metric: took 1m58.7694951s to LocalClient.Create
	I0408 23:55:17.720257    7680 start.go:167] duration metric: took 1m58.7695535s to libmachine.API.Create "ha-061400"
	I0408 23:55:17.720313    7680 start.go:293] postStartSetup for "ha-061400-m03" (driver="hyperv")
	I0408 23:55:17.720313    7680 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0408 23:55:17.730571    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0408 23:55:17.730571    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:55:19.880328    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:19.881242    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:19.881242    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:22.604832    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:22.604904    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:22.605601    7680 sshutil.go:53] new ssh client: &{IP:192.168.126.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\id_rsa Username:docker}
	I0408 23:55:22.727109    7680 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9964714s)
	I0408 23:55:22.746521    7680 ssh_runner.go:195] Run: cat /etc/os-release
	I0408 23:55:22.754090    7680 info.go:137] Remote host: Buildroot 2023.02.9
	I0408 23:55:22.754090    7680 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0408 23:55:22.754848    7680 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0408 23:55:22.755963    7680 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
	I0408 23:55:22.756036    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /etc/ssl/certs/98642.pem
	I0408 23:55:22.767362    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0408 23:55:22.787798    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
	I0408 23:55:22.850437    7680 start.go:296] duration metric: took 5.1300551s for postStartSetup
	I0408 23:55:22.854121    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:55:25.002316    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:25.003357    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:25.003357    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:27.536875    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:27.536875    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:27.538029    7680 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\config.json ...
	I0408 23:55:27.542072    7680 start.go:128] duration metric: took 2m8.5982678s to createHost
	I0408 23:55:27.542072    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:55:29.743128    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:29.743231    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:29.743308    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:32.295043    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:32.295043    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:32.300948    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:55:32.301576    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.126.102 22 <nil> <nil>}
	I0408 23:55:32.301576    7680 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0408 23:55:32.437433    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744156532.464345148
	
	I0408 23:55:32.437433    7680 fix.go:216] guest clock: 1744156532.464345148
	I0408 23:55:32.437433    7680 fix.go:229] Guest: 2025-04-08 23:55:32.464345148 +0000 UTC Remote: 2025-04-08 23:55:27.5420727 +0000 UTC m=+561.904383401 (delta=4.922272448s)
	I0408 23:55:32.437433    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:55:34.590047    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:34.590047    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:34.590626    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:37.108963    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:37.108963    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:37.115315    7680 main.go:141] libmachine: Using SSH client type: native
	I0408 23:55:37.116105    7680 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.126.102 22 <nil> <nil>}
	I0408 23:55:37.116105    7680 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744156532
	I0408 23:55:37.256890    7680 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Apr  8 23:55:32 UTC 2025
	
	I0408 23:55:37.256890    7680 fix.go:236] clock set: Tue Apr  8 23:55:32 UTC 2025
	 (err=<nil>)
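The sequence above is the guest-clock fix: minikube reads date +%s.%N over SSH, compares it with the host clock (a 4.9s delta in this run), and resets the guest with date -s @<epoch>. A rough manual equivalent, assuming a hypothetical SSH alias guest for the VM:

    guest=$(ssh guest 'date +%s')
    host=$(date +%s)
    echo "guest-host skew: $((guest - host))s"
    ssh guest "sudo date -s @${host}"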
	I0408 23:55:37.256890    7680 start.go:83] releasing machines lock for "ha-061400-m03", held for 2m18.3136521s
	I0408 23:55:37.257430    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:55:39.487177    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:39.487177    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:39.488139    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:42.071794    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:42.071794    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:42.076775    7680 out.go:177] * Found network options:
	I0408 23:55:42.079895    7680 out.go:177]   - NO_PROXY=192.168.119.206,192.168.118.215
	W0408 23:55:42.082701    7680 proxy.go:119] fail to check proxy env: Error ip not in block
	W0408 23:55:42.082914    7680 proxy.go:119] fail to check proxy env: Error ip not in block
	I0408 23:55:42.085605    7680 out.go:177]   - NO_PROXY=192.168.119.206,192.168.118.215
	W0408 23:55:42.087693    7680 proxy.go:119] fail to check proxy env: Error ip not in block
	W0408 23:55:42.087693    7680 proxy.go:119] fail to check proxy env: Error ip not in block
	W0408 23:55:42.088664    7680 proxy.go:119] fail to check proxy env: Error ip not in block
	W0408 23:55:42.088664    7680 proxy.go:119] fail to check proxy env: Error ip not in block
	I0408 23:55:42.091656    7680 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0408 23:55:42.091656    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:55:42.102600    7680 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0408 23:55:42.102600    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400-m03 ).state
	I0408 23:55:44.362207    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:44.362562    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:44.362640    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:44.365734    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:44.365734    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:44.365734    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400-m03 ).networkadapters[0]).ipaddresses[0]
	I0408 23:55:47.110534    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:47.110534    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:47.110941    7680 sshutil.go:53] new ssh client: &{IP:192.168.126.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\id_rsa Username:docker}
	I0408 23:55:47.141073    7680 main.go:141] libmachine: [stdout =====>] : 192.168.126.102
	
	I0408 23:55:47.141073    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:47.142607    7680 sshutil.go:53] new ssh client: &{IP:192.168.126.102 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400-m03\id_rsa Username:docker}
	I0408 23:55:47.216473    7680 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1138054s)
	W0408 23:55:47.216534    7680 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0408 23:55:47.227597    7680 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0408 23:55:47.232526    7680 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1408029s)
	W0408 23:55:47.232526    7680 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
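Note the root cause recorded here: the registry probe invoked curl.exe (the Windows binary name) inside the Linux guest, so the shell fails with status 127 (command not found) rather than with a real network error; the proxy warning printed a few lines below follows from that. The distinction is easy to reproduce:

    sh -c 'curl.exe -sS -m 2 https://registry.k8s.io/'; echo "exit=$?"   # exit=127: no such command
    sh -c 'curl -sS -m 2 https://registry.k8s.io/'; echo "exit=$?"       # 0 or a curl network error code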
	I0408 23:55:47.258229    7680 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0408 23:55:47.258229    7680 start.go:495] detecting cgroup driver to use...
	I0408 23:55:47.258229    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:55:47.305727    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0408 23:55:47.334063    7680 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0408 23:55:47.334063    7680 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0408 23:55:47.336031    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0408 23:55:47.354572    7680 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0408 23:55:47.365644    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0408 23:55:47.396104    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:55:47.432393    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0408 23:55:47.463773    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0408 23:55:47.496006    7680 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0408 23:55:47.530023    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0408 23:55:47.561127    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0408 23:55:47.592982    7680 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
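The sed edits above pin containerd to the cgroupfs driver (SystemdCgroup = false) and normalize the runc runtime version; the Docker daemon is configured the same way a few lines below via /etc/docker/daemon.json. One way to confirm both runtimes ended up in agreement:

    docker info --format '{{.CgroupDriver}}'             # expect: cgroupfs
    grep -n 'SystemdCgroup' /etc/containerd/config.toml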
	I0408 23:55:47.624422    7680 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0408 23:55:47.641534    7680 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0408 23:55:47.653155    7680 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0408 23:55:47.687820    7680 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
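The failed sysctl is expected on a fresh guest: /proc/sys/net/bridge/ only appears once br_netfilter is loaded, which is why the modprobe and the ip_forward write follow. This is the standard Kubernetes networking prerequisite, usually written as:

    sudo modprobe br_netfilter
    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
    sudo sysctl -w net.ipv4.ip_forward=1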
	I0408 23:55:47.716717    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:55:47.903826    7680 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0408 23:55:47.933797    7680 start.go:495] detecting cgroup driver to use...
	I0408 23:55:47.944920    7680 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0408 23:55:47.979092    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:55:48.012782    7680 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0408 23:55:48.081536    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0408 23:55:48.118434    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 23:55:48.154411    7680 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0408 23:55:48.213702    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0408 23:55:48.238885    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0408 23:55:48.283932    7680 ssh_runner.go:195] Run: which cri-dockerd
	I0408 23:55:48.301286    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0408 23:55:48.317818    7680 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0408 23:55:48.361362    7680 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0408 23:55:48.557119    7680 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0408 23:55:48.733090    7680 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0408 23:55:48.733243    7680 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0408 23:55:48.780249    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:55:48.975751    7680 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0408 23:55:51.658168    7680 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6823806s)
	I0408 23:55:51.670914    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0408 23:55:51.708703    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 23:55:51.746226    7680 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0408 23:55:51.949698    7680 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0408 23:55:52.162175    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:55:52.356729    7680 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0408 23:55:52.399883    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0408 23:55:52.431318    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:55:52.626035    7680 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0408 23:55:52.734689    7680 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0408 23:55:52.748922    7680 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0408 23:55:52.758320    7680 start.go:563] Will wait 60s for crictl version
	I0408 23:55:52.769576    7680 ssh_runner.go:195] Run: which crictl
	I0408 23:55:52.787650    7680 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0408 23:55:52.844297    7680 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
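crictl picks up the endpoint written to /etc/crictl.yaml earlier (unix:///var/run/cri-dockerd.sock), so the version probe above is answered by cri-dockerd on behalf of Docker 27.4.0. The same check by hand:

    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///var/run/cri-dockerd.sock
    sudo crictl version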
	I0408 23:55:52.854084    7680 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 23:55:52.902685    7680 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0408 23:55:52.940791    7680 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0408 23:55:52.943350    7680 out.go:177]   - env NO_PROXY=192.168.119.206
	I0408 23:55:52.946414    7680 out.go:177]   - env NO_PROXY=192.168.119.206,192.168.118.215
	I0408 23:55:52.949183    7680 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0408 23:55:52.953487    7680 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0408 23:55:52.953487    7680 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0408 23:55:52.953487    7680 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0408 23:55:52.953487    7680 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f4:da:75 Flags:up|broadcast|multicast|running}
	I0408 23:55:52.956870    7680 ip.go:214] interface addr: fe80::e8ab:9cc6:22b1:a5fc/64
	I0408 23:55:52.956870    7680 ip.go:214] interface addr: 192.168.112.1/20
	I0408 23:55:52.968035    7680 ssh_runner.go:195] Run: grep 192.168.112.1	host.minikube.internal$ /etc/hosts
	I0408 23:55:52.974846    7680 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
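The /etc/hosts update above is a filter-and-append idiom: grep -v strips any stale host.minikube.internal entry, the fresh mapping is echoed after it, and the result is copied back over /etc/hosts via sudo cp (a plain redirection would not run with sudo's privileges). Spelled out with this run's values:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.112.1\thost.minikube.internal\n'
    } > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts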
	I0408 23:55:52.995720    7680 mustload.go:65] Loading cluster: ha-061400
	I0408 23:55:52.996445    7680 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:55:52.996668    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:55:55.144479    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:55.144479    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:55.144479    7680 host.go:66] Checking if "ha-061400" exists ...
	I0408 23:55:55.144479    7680 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400 for IP: 192.168.126.102
	I0408 23:55:55.145440    7680 certs.go:194] generating shared ca certs ...
	I0408 23:55:55.145440    7680 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:55:55.145440    7680 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0408 23:55:55.145440    7680 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0408 23:55:55.146538    7680 certs.go:256] generating profile certs ...
	I0408 23:55:55.147667    7680 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\client.key
	I0408 23:55:55.147921    7680 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.5d3ae75b
	I0408 23:55:55.148219    7680 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.5d3ae75b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.119.206 192.168.118.215 192.168.126.102 192.168.127.254]
	I0408 23:55:55.661647    7680 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.5d3ae75b ...
	I0408 23:55:55.661647    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.5d3ae75b: {Name:mka386ad3947e2e59ff49f1e94e7e8f217b7b995 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:55:55.663131    7680 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.5d3ae75b ...
	I0408 23:55:55.663131    7680 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.5d3ae75b: {Name:mk03f50f4c4bde286901c1be8ad3f0de4616726e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0408 23:55:55.664822    7680 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt.5d3ae75b -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt
	I0408 23:55:55.682546    7680 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key.5d3ae75b -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key
	I0408 23:55:55.685339    7680 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key
	I0408 23:55:55.685339    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0408 23:55:55.685661    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0408 23:55:55.685834    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0408 23:55:55.686131    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0408 23:55:55.686364    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0408 23:55:55.686620    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0408 23:55:55.687181    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0408 23:55:55.687455    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0408 23:55:55.688255    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem (1338 bytes)
	W0408 23:55:55.688660    7680 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864_empty.pem, impossibly tiny 0 bytes
	I0408 23:55:55.688860    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0408 23:55:55.689192    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0408 23:55:55.689547    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0408 23:55:55.690134    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0408 23:55:55.690777    7680 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem (1708 bytes)
	I0408 23:55:55.690777    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /usr/share/ca-certificates/98642.pem
	I0408 23:55:55.691536    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:55:55.691536    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem -> /usr/share/ca-certificates/9864.pem
	I0408 23:55:55.691536    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:55:57.814606    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:55:57.815654    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:55:57.815654    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:56:00.379511    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:56:00.380451    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:56:00.380931    7680 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0408 23:56:00.487255    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0408 23:56:00.494969    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0408 23:56:00.529444    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0408 23:56:00.536758    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0408 23:56:00.575701    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0408 23:56:00.582344    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0408 23:56:00.612946    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0408 23:56:00.619591    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0408 23:56:00.651838    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0408 23:56:00.658771    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0408 23:56:00.692967    7680 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0408 23:56:00.699870    7680 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0408 23:56:00.721084    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0408 23:56:00.773604    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0408 23:56:00.819439    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0408 23:56:00.863566    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0408 23:56:00.906332    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0408 23:56:00.951917    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0408 23:56:01.005278    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0408 23:56:01.051041    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-061400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0408 23:56:01.098835    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /usr/share/ca-certificates/98642.pem (1708 bytes)
	I0408 23:56:01.157306    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0408 23:56:01.207222    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem --> /usr/share/ca-certificates/9864.pem (1338 bytes)
	I0408 23:56:01.256322    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0408 23:56:01.289971    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0408 23:56:01.319804    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0408 23:56:01.349189    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0408 23:56:01.378385    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0408 23:56:01.410434    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0408 23:56:01.439719    7680 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0408 23:56:01.483009    7680 ssh_runner.go:195] Run: openssl version
	I0408 23:56:01.502441    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9864.pem && ln -fs /usr/share/ca-certificates/9864.pem /etc/ssl/certs/9864.pem"
	I0408 23:56:01.534823    7680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9864.pem
	I0408 23:56:01.541068    7680 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 23:04 /usr/share/ca-certificates/9864.pem
	I0408 23:56:01.553388    7680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9864.pem
	I0408 23:56:01.572304    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9864.pem /etc/ssl/certs/51391683.0"
	I0408 23:56:01.601306    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98642.pem && ln -fs /usr/share/ca-certificates/98642.pem /etc/ssl/certs/98642.pem"
	I0408 23:56:01.630916    7680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98642.pem
	I0408 23:56:01.637024    7680 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 23:04 /usr/share/ca-certificates/98642.pem
	I0408 23:56:01.648525    7680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98642.pem
	I0408 23:56:01.668411    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98642.pem /etc/ssl/certs/3ec20f2e.0"
	I0408 23:56:01.700417    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0408 23:56:01.732921    7680 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:56:01.739927    7680 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:56:01.751129    7680 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0408 23:56:01.770935    7680 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
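The openssl x509 -hash calls above compute the subject-hash names (51391683, 3ec20f2e, b5213941) under which OpenSSL expects CA certificates to be linked in /etc/ssl/certs. The convention, reproduced by hand for one of the certs:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"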
	I0408 23:56:01.801979    7680 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0408 23:56:01.808914    7680 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0408 23:56:01.809301    7680 kubeadm.go:934] updating node {m03 192.168.126.102 8443 v1.32.2 docker true true} ...
	I0408 23:56:01.809540    7680 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-061400-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.126.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:default APIServerHAVIP:192.168.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0408 23:56:01.809540    7680 kube-vip.go:115] generating kube-vip config ...
	I0408 23:56:01.821082    7680 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0408 23:56:01.847795    7680 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0408 23:56:01.847795    7680 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.127.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
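The generated manifest runs kube-vip as a static pod on each control plane; leader election on the plndr-cp-lock lease decides which node announces the VIP 192.168.127.254 over ARP and load-balances port 8443. A quick check of where the VIP currently lives, run on a control-plane node (interface per the vip_interface setting above):

    ip addr show eth0 | grep 192.168.127.254   # present only on the current leader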
	I0408 23:56:01.861988    7680 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0408 23:56:01.878319    7680 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0408 23:56:01.889582    7680 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0408 23:56:01.909280    7680 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0408 23:56:01.909280    7680 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256
	I0408 23:56:01.909280    7680 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256
	I0408 23:56:01.909280    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0408 23:56:01.909280    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm -> /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0408 23:56:01.924390    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 23:56:01.925019    7680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0408 23:56:01.925019    7680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0408 23:56:01.948111    7680 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0408 23:56:01.948111    7680 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0408 23:56:01.949027    7680 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0408 23:56:01.949237    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0408 23:56:01.949495    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
	I0408 23:56:01.964638    7680 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0408 23:56:02.030584    7680 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0408 23:56:02.030862    7680 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
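The binaries above are fetched from dl.k8s.io with a checksum=file:...sha256 fragment, i.e. each download is verified against its published SHA-256 before being scp'd into /var/lib/minikube/binaries. The manual equivalent, following the upstream convention:

    curl -LO https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl
    curl -LO https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check   # expect: kubectl: OK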
	I0408 23:56:03.257136    7680 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0408 23:56:03.277569    7680 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0408 23:56:03.318624    7680 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0408 23:56:03.351237    7680 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1443 bytes)
	I0408 23:56:03.393478    7680 ssh_runner.go:195] Run: grep 192.168.127.254	control-plane.minikube.internal$ /etc/hosts
	I0408 23:56:03.400235    7680 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.127.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0408 23:56:03.436854    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:56:03.659865    7680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 23:56:03.695629    7680 host.go:66] Checking if "ha-061400" exists ...
	I0408 23:56:03.696618    7680 start.go:317] joinCluster: &{Name:ha-061400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-061400 Namespace:def
ault APIServerHAVIP:192.168.127.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.119.206 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.118.215 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.126.102 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspek
tor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 23:56:03.696873    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0408 23:56:03.696943    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-061400 ).state
	I0408 23:56:05.853126    7680 main.go:141] libmachine: [stdout =====>] : Running
	
	I0408 23:56:05.853126    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:56:05.854146    7680 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-061400 ).networkadapters[0]).ipaddresses[0]
	I0408 23:56:08.453032    7680 main.go:141] libmachine: [stdout =====>] : 192.168.119.206
	
	I0408 23:56:08.453032    7680 main.go:141] libmachine: [stderr =====>] : 
	I0408 23:56:08.454153    7680 sshutil.go:53] new ssh client: &{IP:192.168.119.206 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-061400\id_rsa Username:docker}
	I0408 23:56:08.667548    7680 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9706091s)
	I0408 23:56:08.667690    7680 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.126.102 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 23:56:08.667690    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wlyb9l.uobras4z9tmnx4in --discovery-token-ca-cert-hash sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-061400-m03 --control-plane --apiserver-advertise-address=192.168.126.102 --apiserver-bind-port=8443"
	I0408 23:56:52.219978    7680 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wlyb9l.uobras4z9tmnx4in --discovery-token-ca-cert-hash sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-061400-m03 --control-plane --apiserver-advertise-address=192.168.126.102 --apiserver-bind-port=8443": (43.5517131s)
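The --discovery-token-ca-cert-hash in the join command pins the new node to the cluster CA: it is the SHA-256 of the CA's DER-encoded public key. Per the kubeadm docs it can be recomputed on any control-plane node (for an RSA CA key) with:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'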
	I0408 23:56:52.219978    7680 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0408 23:56:53.086433    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-061400-m03 minikube.k8s.io/updated_at=2025_04_08T23_56_53_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83 minikube.k8s.io/name=ha-061400 minikube.k8s.io/primary=false
	I0408 23:56:53.263171    7680 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-061400-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0408 23:56:53.448007    7680 start.go:319] duration metric: took 49.7507324s to joinCluster
	I0408 23:56:53.448007    7680 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.126.102 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0408 23:56:53.448998    7680 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0408 23:56:53.456984    7680 out.go:177] * Verifying Kubernetes components...
	I0408 23:56:53.476991    7680 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0408 23:56:53.883922    7680 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0408 23:56:53.917566    7680 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0408 23:56:53.918376    7680 kapi.go:59] client config for ha-061400: &rest.Config{Host:"https://192.168.127.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-061400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-061400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2809400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0408 23:56:53.918536    7680 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.127.254:8443 with https://192.168.119.206:8443
	I0408 23:56:53.919410    7680 node_ready.go:35] waiting up to 6m0s for node "ha-061400-m03" to be "Ready" ...
	I0408 23:56:53.919410    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:53.919410    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:53.919410    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:53.919410    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:53.937245    7680 round_trippers.go:581] Response Status: 200 OK in 17 milliseconds
	I0408 23:56:54.419596    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:54.419596    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:54.419596    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:54.419596    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:54.425201    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:56:54.920110    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:54.920110    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:54.920110    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:54.920110    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:54.927198    7680 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0408 23:56:55.419986    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:55.419986    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:55.419986    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:55.419986    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:55.425244    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:56:55.920739    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:55.920739    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:55.920739    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:55.920739    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:55.937063    7680 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0408 23:56:55.938072    7680 node_ready.go:53] node "ha-061400-m03" has status "Ready":"False"
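The loop here polls GET /api/v1/nodes/ha-061400-m03 roughly every 500ms until the Ready condition flips, with a 6m ceiling. The kubectl equivalent of the same wait:

    kubectl get node ha-061400-m03 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    kubectl wait --for=condition=Ready node/ha-061400-m03 --timeout=6m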
	I0408 23:56:56.421011    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:56.421011    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:56.421011    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:56.421011    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:56.425602    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:56:56.921264    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:56.921264    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:56.921264    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:56.921264    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:56.927331    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:56:57.420328    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:57.420379    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:57.420417    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:57.420417    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:57.425902    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:56:57.920502    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:57.920562    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:57.920643    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:57.920643    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:57.930022    7680 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0408 23:56:58.419817    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:58.420340    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:58.420413    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:58.420413    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:58.426121    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:56:58.426121    7680 node_ready.go:53] node "ha-061400-m03" has status "Ready":"False"
	I0408 23:56:58.920862    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:58.920936    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:58.920936    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:58.920936    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:58.926341    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:56:59.420438    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:59.420438    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:59.420438    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:59.420438    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:59.427084    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:56:59.920571    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:56:59.920571    7680 round_trippers.go:476] Request Headers:
	I0408 23:56:59.920571    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:56:59.920571    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:56:59.926702    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:00.420097    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:00.420097    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:00.420097    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:00.420097    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:00.572568    7680 round_trippers.go:581] Response Status: 200 OK in 152 milliseconds
	I0408 23:57:00.573125    7680 node_ready.go:53] node "ha-061400-m03" has status "Ready":"False"
	I0408 23:57:00.920754    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:00.920754    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:00.920754    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:00.920754    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:00.926672    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:01.420249    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:01.420386    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:01.420386    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:01.420386    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:01.425783    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:01.920600    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:01.920731    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:01.920731    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:01.920731    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:01.926710    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:02.420502    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:02.420502    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:02.420615    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:02.420615    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:02.427303    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:02.920160    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:02.920160    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:02.920160    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:02.920160    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:02.926713    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:02.927250    7680 node_ready.go:53] node "ha-061400-m03" has status "Ready":"False"
	I0408 23:57:03.420542    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:03.420542    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:03.420542    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:03.420542    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:03.434502    7680 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0408 23:57:03.920777    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:03.920882    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:03.920882    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:03.920882    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:03.925445    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:57:04.420677    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:04.420677    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:04.420677    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:04.420677    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:04.425711    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:04.920241    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:04.920295    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:04.920295    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:04.920295    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:04.927645    7680 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0408 23:57:04.928186    7680 node_ready.go:53] node "ha-061400-m03" has status "Ready":"False"
	I0408 23:57:05.420047    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:05.420047    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:05.420047    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:05.420047    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:05.425500    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:05.920775    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:05.920775    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:05.920775    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:05.920775    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:05.925877    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:06.420598    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:06.420598    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:06.420598    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:06.420707    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:06.431386    7680 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0408 23:57:06.920061    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:06.920547    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:06.920547    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:06.920547    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:06.929925    7680 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0408 23:57:06.930323    7680 node_ready.go:53] node "ha-061400-m03" has status "Ready":"False"
	I0408 23:57:07.419917    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:07.419917    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:07.419917    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:07.419917    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:07.425727    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:07.921141    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:07.921141    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:07.921141    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:07.921141    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:07.925877    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:57:08.419722    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:08.419722    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:08.419722    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:08.419722    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:08.424248    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:57:08.919832    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:08.919832    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:08.919832    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:08.919832    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:08.925015    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:09.419913    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:09.419913    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:09.419913    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:09.419913    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:09.425452    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:09.425452    7680 node_ready.go:53] node "ha-061400-m03" has status "Ready":"False"
	I0408 23:57:09.920170    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:09.920239    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:09.920239    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:09.920239    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:09.926353    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:10.420567    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:10.420567    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:10.420567    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:10.420567    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:10.425909    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:10.919865    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:10.919865    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:10.919865    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:10.919865    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:10.926214    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:11.420519    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:11.420519    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:11.420519    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:11.420519    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:11.425254    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:57:11.425789    7680 node_ready.go:53] node "ha-061400-m03" has status "Ready":"False"
	I0408 23:57:11.920085    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:11.920085    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:11.920085    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:11.920085    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:11.927917    7680 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0408 23:57:12.421250    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:12.421327    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.421327    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.421327    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.426507    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:12.920584    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:12.920584    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.920584    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.920584    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.926532    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:12.927469    7680 node_ready.go:49] node "ha-061400-m03" has status "Ready":"True"
	I0408 23:57:12.927607    7680 node_ready.go:38] duration metric: took 19.0079464s for node "ha-061400-m03" to be "Ready" ...
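The ~500ms cadence of the GETs above is minikube's node_ready poll: fetch the node object, inspect its Ready condition, sleep, and retry until the status flips to "Ready":"True" as it finally does here. A minimal client-go sketch of the same loop (the helper name waitNodeReady and the clientset wiring are illustrative, not minikube's actual code):

    // Sketch: poll a node's Ready condition roughly every 500ms, as the
    // node_ready.go lines above do. Assumes a configured *kubernetes.Clientset.
    package nodewait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil // the "Ready":"True" case above
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the cadence in the log
        }
        return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
    }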
	I0408 23:57:12.927607    7680 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0408 23:57:12.927737    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods
	I0408 23:57:12.927737    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.927809    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.927809    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.932914    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:12.937232    7680 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-rzk8c" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:12.937343    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-rzk8c
	I0408 23:57:12.937399    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.937399    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.937482    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.941345    7680 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0408 23:57:12.942405    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:12.942555    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.942555    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.942555    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.955780    7680 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0408 23:57:12.956813    7680 pod_ready.go:93] pod "coredns-668d6bf9bc-rzk8c" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:12.957193    7680 pod_ready.go:82] duration metric: took 19.9613ms for pod "coredns-668d6bf9bc-rzk8c" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:12.957276    7680 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-scvcr" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:12.957459    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-scvcr
	I0408 23:57:12.957459    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.957511    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.957511    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.961384    7680 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0408 23:57:12.961807    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:12.961807    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.961887    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.961887    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.965407    7680 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0408 23:57:12.965865    7680 pod_ready.go:93] pod "coredns-668d6bf9bc-scvcr" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:12.965865    7680 pod_ready.go:82] duration metric: took 8.5892ms for pod "coredns-668d6bf9bc-scvcr" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:12.965921    7680 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:12.966110    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-061400
	I0408 23:57:12.966170    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.966170    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.966225    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.970445    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:57:12.971524    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:12.971594    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.971594    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.971634    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.976502    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:57:12.976502    7680 pod_ready.go:93] pod "etcd-ha-061400" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:12.976502    7680 pod_ready.go:82] duration metric: took 10.5815ms for pod "etcd-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:12.976502    7680 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:12.976502    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-061400-m02
	I0408 23:57:12.977744    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.977792    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.977792    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.981476    7680 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0408 23:57:12.982410    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:57:12.982410    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:12.982410    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:12.982479    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:12.991574    7680 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0408 23:57:12.992269    7680 pod_ready.go:93] pod "etcd-ha-061400-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:12.992316    7680 pod_ready.go:82] duration metric: took 15.8137ms for pod "etcd-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:12.992316    7680 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-061400-m03" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:13.120747    7680 request.go:661] Waited for 128.3664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-061400-m03
	I0408 23:57:13.120747    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/etcd-ha-061400-m03
	I0408 23:57:13.120747    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:13.120747    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:13.120747    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:13.126457    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:13.320888    7680 request.go:661] Waited for 193.668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:13.320888    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:13.320888    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:13.320888    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:13.320888    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:13.326530    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:13.326631    7680 pod_ready.go:93] pod "etcd-ha-061400-m03" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:13.326631    7680 pod_ready.go:82] duration metric: took 334.31ms for pod "etcd-ha-061400-m03" in "kube-system" namespace to be "Ready" ...
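The "Waited for ... due to client-side throttling, not priority and fairness" lines that start appearing here come from client-go's own token-bucket rate limiter, which logs whenever it holds a request back noticeably (the waits above are roughly 130-195ms); they are a client-side artifact, not apiserver pushback. The limiter is governed by QPS and Burst on rest.Config. A hedged sketch of raising them (values and the kubeconfig path are illustrative):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Path is illustrative; minikube manages its own kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // client-go default is 5 requests/sec
        cfg.Burst = 100 // client-go default is 10
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("client ready:", cs != nil)
    }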
	I0408 23:57:13.327189    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:13.520791    7680 request.go:661] Waited for 193.4978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-061400
	I0408 23:57:13.520791    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-061400
	I0408 23:57:13.520791    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:13.520791    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:13.520791    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:13.526782    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:13.720583    7680 request.go:661] Waited for 192.7717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:13.721076    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:13.721076    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:13.721076    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:13.721076    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:13.727209    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:13.727209    7680 pod_ready.go:93] pod "kube-apiserver-ha-061400" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:13.727209    7680 pod_ready.go:82] duration metric: took 400.0147ms for pod "kube-apiserver-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:13.727209    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:13.920992    7680 request.go:661] Waited for 193.7812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-061400-m02
	I0408 23:57:13.920992    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-061400-m02
	I0408 23:57:13.920992    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:13.920992    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:13.920992    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:13.926693    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:14.121239    7680 request.go:661] Waited for 193.8932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:57:14.121239    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:57:14.121239    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:14.121239    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:14.121239    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:14.127562    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:14.128204    7680 pod_ready.go:93] pod "kube-apiserver-ha-061400-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:14.128204    7680 pod_ready.go:82] duration metric: took 400.9899ms for pod "kube-apiserver-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:14.128204    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-061400-m03" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:14.320362    7680 request.go:661] Waited for 191.947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-061400-m03
	I0408 23:57:14.320362    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-061400-m03
	I0408 23:57:14.320967    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:14.320967    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:14.320967    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:14.326866    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:14.521010    7680 request.go:661] Waited for 193.5706ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:14.521010    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:14.521010    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:14.521010    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:14.521010    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:14.526489    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:14.526555    7680 pod_ready.go:93] pod "kube-apiserver-ha-061400-m03" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:14.526555    7680 pod_ready.go:82] duration metric: took 398.346ms for pod "kube-apiserver-ha-061400-m03" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:14.526555    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:14.720922    7680 request.go:661] Waited for 194.3639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-061400
	I0408 23:57:14.720922    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-061400
	I0408 23:57:14.720922    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:14.720922    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:14.720922    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:14.727055    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:14.920322    7680 request.go:661] Waited for 193.2637ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:14.920613    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:14.920613    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:14.920613    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:14.920613    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:14.926054    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:14.926303    7680 pod_ready.go:93] pod "kube-controller-manager-ha-061400" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:14.926303    7680 pod_ready.go:82] duration metric: took 399.743ms for pod "kube-controller-manager-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:14.926303    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:15.120682    7680 request.go:661] Waited for 194.3764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-061400-m02
	I0408 23:57:15.121180    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-061400-m02
	I0408 23:57:15.121180    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:15.121180    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:15.121180    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:15.131104    7680 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0408 23:57:15.321096    7680 request.go:661] Waited for 189.4134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:57:15.321096    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:57:15.321565    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:15.321565    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:15.321565    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:15.327292    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:57:15.327580    7680 pod_ready.go:93] pod "kube-controller-manager-ha-061400-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:15.327660    7680 pod_ready.go:82] duration metric: took 401.3517ms for pod "kube-controller-manager-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:15.327660    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-061400-m03" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:15.521156    7680 request.go:661] Waited for 193.3592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-061400-m03
	I0408 23:57:15.521596    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-061400-m03
	I0408 23:57:15.521596    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:15.521596    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:15.521596    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:15.526703    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:15.721299    7680 request.go:661] Waited for 194.0271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:15.721299    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:15.721299    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:15.721299    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:15.721299    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:15.727229    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:15.727305    7680 pod_ready.go:93] pod "kube-controller-manager-ha-061400-m03" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:15.727858    7680 pod_ready.go:82] duration metric: took 399.6396ms for pod "kube-controller-manager-ha-061400-m03" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:15.727858    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lr9jb" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:15.920869    7680 request.go:661] Waited for 193.0086ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lr9jb
	I0408 23:57:15.920869    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lr9jb
	I0408 23:57:15.920869    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:15.920869    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:15.920869    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:15.925982    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:16.120766    7680 request.go:661] Waited for 193.5359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:16.121267    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:16.121297    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:16.121297    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:16.121297    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:16.129550    7680 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0408 23:57:16.129550    7680 pod_ready.go:93] pod "kube-proxy-lr9jb" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:16.129550    7680 pod_ready.go:82] duration metric: took 401.687ms for pod "kube-proxy-lr9jb" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:16.129550    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nkwqr" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:16.320926    7680 request.go:661] Waited for 191.3731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nkwqr
	I0408 23:57:16.320926    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nkwqr
	I0408 23:57:16.320926    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:16.320926    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:16.320926    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:16.336757    7680 round_trippers.go:581] Response Status: 200 OK in 15 milliseconds
	I0408 23:57:16.520832    7680 request.go:661] Waited for 183.4389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:57:16.520832    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:57:16.520832    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:16.520832    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:16.520832    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:16.526311    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:16.526951    7680 pod_ready.go:93] pod "kube-proxy-nkwqr" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:16.526951    7680 pod_ready.go:82] duration metric: took 397.3952ms for pod "kube-proxy-nkwqr" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:16.527069    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rl7bv" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:16.720489    7680 request.go:661] Waited for 193.4175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rl7bv
	I0408 23:57:16.720943    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rl7bv
	I0408 23:57:16.720943    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:16.720943    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:16.720943    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:16.726569    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:16.920477    7680 request.go:661] Waited for 193.3957ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:16.920477    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:16.920477    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:16.920477    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:16.920477    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:16.925713    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:16.926947    7680 pod_ready.go:93] pod "kube-proxy-rl7bv" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:16.926947    7680 pod_ready.go:82] duration metric: took 399.8726ms for pod "kube-proxy-rl7bv" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:16.926947    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:17.120839    7680 request.go:661] Waited for 193.8895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-061400
	I0408 23:57:17.121442    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-061400
	I0408 23:57:17.121535    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:17.121535    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:17.121535    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:17.128306    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:17.320560    7680 request.go:661] Waited for 191.7976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:17.320560    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400
	I0408 23:57:17.320560    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:17.320560    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:17.320560    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:17.326905    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:17.326905    7680 pod_ready.go:93] pod "kube-scheduler-ha-061400" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:17.327464    7680 pod_ready.go:82] duration metric: took 400.5121ms for pod "kube-scheduler-ha-061400" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:17.327565    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:17.520747    7680 request.go:661] Waited for 193.1797ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-061400-m02
	I0408 23:57:17.521181    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-061400-m02
	I0408 23:57:17.521270    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:17.521270    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:17.521270    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:17.530002    7680 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0408 23:57:17.720284    7680 request.go:661] Waited for 190.2794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:57:17.720284    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m02
	I0408 23:57:17.720284    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:17.720284    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:17.720284    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:17.725244    7680 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0408 23:57:17.725244    7680 pod_ready.go:93] pod "kube-scheduler-ha-061400-m02" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:17.725244    7680 pod_ready.go:82] duration metric: took 397.6735ms for pod "kube-scheduler-ha-061400-m02" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:17.725244    7680 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-061400-m03" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:17.920794    7680 request.go:661] Waited for 194.7756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-061400-m03
	I0408 23:57:17.920794    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-061400-m03
	I0408 23:57:17.920794    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:17.921479    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:17.921479    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:17.927343    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:18.120648    7680 request.go:661] Waited for 192.7748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:18.120648    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes/ha-061400-m03
	I0408 23:57:18.120990    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:18.120990    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:18.120990    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:18.138457    7680 round_trippers.go:581] Response Status: 200 OK in 17 milliseconds
	I0408 23:57:18.138997    7680 pod_ready.go:93] pod "kube-scheduler-ha-061400-m03" in "kube-system" namespace has status "Ready":"True"
	I0408 23:57:18.138997    7680 pod_ready.go:82] duration metric: took 413.7477ms for pod "kube-scheduler-ha-061400-m03" in "kube-system" namespace to be "Ready" ...
	I0408 23:57:18.138997    7680 pod_ready.go:39] duration metric: took 5.2113205s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
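Each pod_ready wait in the block above reduces to one predicate per pod: does its status carry the PodReady condition with status True. A sketch of that predicate (the helper name podReady is illustrative):

    package podwait

    import corev1 "k8s.io/api/core/v1"

    // podReady mirrors the `has status "Ready":"True"` lines above: a pod
    // counts as Ready only when its PodReady condition reports True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }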
	I0408 23:57:18.138997    7680 api_server.go:52] waiting for apiserver process to appear ...
	I0408 23:57:18.151004    7680 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0408 23:57:18.182192    7680 api_server.go:72] duration metric: took 24.7338589s to wait for apiserver process to appear ...
	I0408 23:57:18.182192    7680 api_server.go:88] waiting for apiserver healthz status ...
	I0408 23:57:18.183766    7680 api_server.go:253] Checking apiserver healthz at https://192.168.119.206:8443/healthz ...
	I0408 23:57:18.193479    7680 api_server.go:279] https://192.168.119.206:8443/healthz returned 200:
	ok
	I0408 23:57:18.193479    7680 round_trippers.go:470] GET https://192.168.119.206:8443/version
	I0408 23:57:18.193479    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:18.193479    7680 round_trippers.go:480]     Accept: application/json, */*
	I0408 23:57:18.193479    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:18.199841    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:18.199841    7680 api_server.go:141] control plane version: v1.32.2
	I0408 23:57:18.199841    7680 api_server.go:131] duration metric: took 16.0745ms to wait for apiserver health ...
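The health wait above is two plain GETs against the apiserver: /healthz, expecting the literal body "ok", then /version, whose gitVersion yields the "control plane version: v1.32.2" line. A sketch using client-go's discovery client (clientset wiring omitted; checkAPIServer is an illustrative name):

    package apicheck

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // checkAPIServer probes /healthz, then /version, as the log above does.
    func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil || string(body) != "ok" {
            return fmt.Errorf("healthz not ok: %v %q", err, body)
        }
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            return err
        }
        fmt.Println("control plane version:", v.GitVersion) // e.g. v1.32.2
        return nil
    }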
	I0408 23:57:18.199841    7680 system_pods.go:43] waiting for kube-system pods to appear ...
	I0408 23:57:18.321193    7680 request.go:661] Waited for 120.6978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods
	I0408 23:57:18.321519    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods
	I0408 23:57:18.321519    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:18.321519    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:18.321519    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:18.330127    7680 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0408 23:57:18.333349    7680 system_pods.go:59] 24 kube-system pods found
	I0408 23:57:18.333349    7680 system_pods.go:61] "coredns-668d6bf9bc-rzk8c" [18f6703f-34ad-403f-b86d-9a8f3dc927a0] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "coredns-668d6bf9bc-scvcr" [952efdd7-d201-4747-833a-59e05925e74f] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "etcd-ha-061400" [429dfaa4-c9bf-47dc-81f9-ab33ad3acee4] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "etcd-ha-061400-m02" [5fa6b2de-e3e8-4c95-84e3-3e344ce6a56f] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "etcd-ha-061400-m03" [9cfea750-78b9-4595-8046-cca9379d4651] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kindnet-44mc6" [a8a857e1-90f1-4346-97a7-0b083352aeda] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kindnet-7mvqz" [3fcc4494-1878-48e2-97ee-f76dcff55c29] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kindnet-d8bcw" [020b9216-ff50-4ac1-9c3e-d6b836c42ecf] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-apiserver-ha-061400" [488f7097-53fd-4754-aa77-78aed24b3494] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-apiserver-ha-061400-m02" [1f83551d-39c0-4485-b4a6-d44c3e58b435] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-apiserver-ha-061400-m03" [69794402-115c-4ba3-a9e9-35d1f59b5a46] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-controller-manager-ha-061400" [28c1163e-e283-49b0-bab7-b91d1b73ab27] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-controller-manager-ha-061400-m02" [89ab7c55-91a9-452b-9c0e-3673bf608abc] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-controller-manager-ha-061400-m03" [d4a82363-d392-4806-bdec-5e370db14a21] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-proxy-lr9jb" [4ea29fd2-fb54-44d7-a558-a272fd4f05f5] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-proxy-nkwqr" [20f509f0-ca9e-4464-b87f-e5d226ce9e3c] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-proxy-rl7bv" [d928bc40-4dcd-47d2-9c7a-b41237c0b070] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-scheduler-ha-061400" [b16bc563-a6aa-49d3-b7c4-74b5827bb66e] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-scheduler-ha-061400-m02" [e9a386c4-fe99-49d0-bff9-d434ba81d735] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-scheduler-ha-061400-m03" [db70e2f5-39bf-42ee-826f-6643dc5fc79a] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-vip-ha-061400" [b677e4c1-39bf-459c-a33c-ecfce817e2a5] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-vip-ha-061400-m02" [2a30dc1d-3208-468f-8614-a469337f5ac2] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "kube-vip-ha-061400-m03" [e3b9b5ad-7566-45c6-9a8f-2be704a0b6c0] Running
	I0408 23:57:18.333349    7680 system_pods.go:61] "storage-provisioner" [bd11797d-cec8-419e-b7e9-1d537d9a7378] Running
	I0408 23:57:18.333349    7680 system_pods.go:74] duration metric: took 133.5058ms to wait for pod list to return data ...
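The 24-pod inventory above comes from one list call against the kube-system namespace, with each pod reported as name, UID, and phase. A sketch of the same listing (assumes the clientset cs as in the earlier sketches):

    package podlist

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // listSystemPods reproduces the `"<name>" [<uid>] Running` lines above.
    func listSystemPods(ctx context.Context, cs *kubernetes.Clientset) error {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
        return nil
    }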
	I0408 23:57:18.333349    7680 default_sa.go:34] waiting for default service account to be created ...
	I0408 23:57:18.521896    7680 request.go:661] Waited for 188.5453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/default/serviceaccounts
	I0408 23:57:18.522284    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/default/serviceaccounts
	I0408 23:57:18.522284    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:18.522284    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:18.522284    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:18.527858    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:18.528198    7680 default_sa.go:45] found service account: "default"
	I0408 23:57:18.528313    7680 default_sa.go:55] duration metric: took 194.9048ms for default service account to be created ...
	I0408 23:57:18.528369    7680 system_pods.go:116] waiting for k8s-apps to be running ...
	I0408 23:57:18.720379    7680 request.go:661] Waited for 191.958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods
	I0408 23:57:18.720379    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/namespaces/kube-system/pods
	I0408 23:57:18.720379    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:18.720379    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:18.720379    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:18.725422    7680 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0408 23:57:18.729479    7680 system_pods.go:86] 24 kube-system pods found
	I0408 23:57:18.729545    7680 system_pods.go:89] "coredns-668d6bf9bc-rzk8c" [18f6703f-34ad-403f-b86d-9a8f3dc927a0] Running
	I0408 23:57:18.729545    7680 system_pods.go:89] "coredns-668d6bf9bc-scvcr" [952efdd7-d201-4747-833a-59e05925e74f] Running
	I0408 23:57:18.729545    7680 system_pods.go:89] "etcd-ha-061400" [429dfaa4-c9bf-47dc-81f9-ab33ad3acee4] Running
	I0408 23:57:18.729545    7680 system_pods.go:89] "etcd-ha-061400-m02" [5fa6b2de-e3e8-4c95-84e3-3e344ce6a56f] Running
	I0408 23:57:18.729545    7680 system_pods.go:89] "etcd-ha-061400-m03" [9cfea750-78b9-4595-8046-cca9379d4651] Running
	I0408 23:57:18.729545    7680 system_pods.go:89] "kindnet-44mc6" [a8a857e1-90f1-4346-97a7-0b083352aeda] Running
	I0408 23:57:18.729545    7680 system_pods.go:89] "kindnet-7mvqz" [3fcc4494-1878-48e2-97ee-f76dcff55c29] Running
	I0408 23:57:18.729545    7680 system_pods.go:89] "kindnet-d8bcw" [020b9216-ff50-4ac1-9c3e-d6b836c42ecf] Running
	I0408 23:57:18.729545    7680 system_pods.go:89] "kube-apiserver-ha-061400" [488f7097-53fd-4754-aa77-78aed24b3494] Running
	I0408 23:57:18.729630    7680 system_pods.go:89] "kube-apiserver-ha-061400-m02" [1f83551d-39c0-4485-b4a6-d44c3e58b435] Running
	I0408 23:57:18.729630    7680 system_pods.go:89] "kube-apiserver-ha-061400-m03" [69794402-115c-4ba3-a9e9-35d1f59b5a46] Running
	I0408 23:57:18.729630    7680 system_pods.go:89] "kube-controller-manager-ha-061400" [28c1163e-e283-49b0-bab7-b91d1b73ab27] Running
	I0408 23:57:18.729630    7680 system_pods.go:89] "kube-controller-manager-ha-061400-m02" [89ab7c55-91a9-452b-9c0e-3673bf608abc] Running
	I0408 23:57:18.729630    7680 system_pods.go:89] "kube-controller-manager-ha-061400-m03" [d4a82363-d392-4806-bdec-5e370db14a21] Running
	I0408 23:57:18.729630    7680 system_pods.go:89] "kube-proxy-lr9jb" [4ea29fd2-fb54-44d7-a558-a272fd4f05f5] Running
	I0408 23:57:18.729711    7680 system_pods.go:89] "kube-proxy-nkwqr" [20f509f0-ca9e-4464-b87f-e5d226ce9e3c] Running
	I0408 23:57:18.729711    7680 system_pods.go:89] "kube-proxy-rl7bv" [d928bc40-4dcd-47d2-9c7a-b41237c0b070] Running
	I0408 23:57:18.729711    7680 system_pods.go:89] "kube-scheduler-ha-061400" [b16bc563-a6aa-49d3-b7c4-74b5827bb66e] Running
	I0408 23:57:18.729711    7680 system_pods.go:89] "kube-scheduler-ha-061400-m02" [e9a386c4-fe99-49d0-bff9-d434ba81d735] Running
	I0408 23:57:18.729711    7680 system_pods.go:89] "kube-scheduler-ha-061400-m03" [db70e2f5-39bf-42ee-826f-6643dc5fc79a] Running
	I0408 23:57:18.729711    7680 system_pods.go:89] "kube-vip-ha-061400" [b677e4c1-39bf-459c-a33c-ecfce817e2a5] Running
	I0408 23:57:18.729711    7680 system_pods.go:89] "kube-vip-ha-061400-m02" [2a30dc1d-3208-468f-8614-a469337f5ac2] Running
	I0408 23:57:18.729711    7680 system_pods.go:89] "kube-vip-ha-061400-m03" [e3b9b5ad-7566-45c6-9a8f-2be704a0b6c0] Running
	I0408 23:57:18.729780    7680 system_pods.go:89] "storage-provisioner" [bd11797d-cec8-419e-b7e9-1d537d9a7378] Running
	I0408 23:57:18.729780    7680 system_pods.go:126] duration metric: took 201.3859ms to wait for k8s-apps to be running ...
	I0408 23:57:18.729780    7680 system_svc.go:44] waiting for kubelet service to be running ....
	I0408 23:57:18.741726    7680 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0408 23:57:18.769079    7680 system_svc.go:56] duration metric: took 39.2989ms WaitForService to wait for kubelet
	I0408 23:57:18.769079    7680 kubeadm.go:582] duration metric: took 25.320738s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0408 23:57:18.769164    7680 node_conditions.go:102] verifying NodePressure condition ...
	I0408 23:57:18.920700    7680 request.go:661] Waited for 151.3877ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.119.206:8443/api/v1/nodes
	I0408 23:57:18.921257    7680 round_trippers.go:470] GET https://192.168.119.206:8443/api/v1/nodes
	I0408 23:57:18.921257    7680 round_trippers.go:476] Request Headers:
	I0408 23:57:18.921257    7680 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0408 23:57:18.921257    7680 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0408 23:57:18.927755    7680 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0408 23:57:18.928537    7680 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 23:57:18.928610    7680 node_conditions.go:123] node cpu capacity is 2
	I0408 23:57:18.928610    7680 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 23:57:18.928610    7680 node_conditions.go:123] node cpu capacity is 2
	I0408 23:57:18.928610    7680 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0408 23:57:18.928610    7680 node_conditions.go:123] node cpu capacity is 2
	I0408 23:57:18.928610    7680 node_conditions.go:105] duration metric: took 159.4445ms to run NodePressure ...
	I0408 23:57:18.928670    7680 start.go:241] waiting for startup goroutines ...
	I0408 23:57:18.928748    7680 start.go:255] writing updated cluster config ...
	I0408 23:57:18.940095    7680 ssh_runner.go:195] Run: rm -f paused
	I0408 23:57:19.092799    7680 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0408 23:57:19.100210    7680 out.go:177] * Done! kubectl is now configured to use "ha-061400" cluster and "default" namespace by default
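
	Note: the trace above is minikube's post-start readiness loop: it polls the kube-system pod list, the default service account, the pod list again for running apps, the kubelet systemd unit, and finally per-node capacity, before declaring the profile ready. The same checks can be replayed by hand against the context the run created; a minimal sketch, assuming the "ha-061400" kubeconfig context from the log above still resolves:

	    # pods the readiness loop enumerated (24 in kube-system at this point)
	    kubectl --context ha-061400 get pods -n kube-system
	    # the default service account it waited for
	    kubectl --context ha-061400 get serviceaccount default -n default
	    # the per-node cpu / ephemeral-storage capacity behind the NodePressure check
	    kubectl --context ha-061400 get nodes -o wide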
	
	
	==> Docker <==
	Apr 08 23:49:40 ha-061400 cri-dockerd[1341]: time="2025-04-08T23:49:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/053f18a3f15a430b334c18647767c96e5c9aefa0d49ff7988c41dd94ebb1ef84/resolv.conf as [nameserver 192.168.112.1]"
	Apr 08 23:49:40 ha-061400 cri-dockerd[1341]: time="2025-04-08T23:49:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b970cca1abdffe883ab712bc2a9ff00c9e99300ea86bd493b95b8002eb151801/resolv.conf as [nameserver 192.168.112.1]"
	Apr 08 23:49:40 ha-061400 cri-dockerd[1341]: time="2025-04-08T23:49:40Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1044fd2112454762d545724c0d174d35c038a14ed69086f711c54fa6c5f2007c/resolv.conf as [nameserver 192.168.112.1]"
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.388606126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.388688926Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.388706026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.388956927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.664557835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.664810536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.665069637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.665665939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.742539220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.742700221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.742912522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:49:40 ha-061400 dockerd[1451]: time="2025-04-08T23:49:40.743810825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:57:58 ha-061400 dockerd[1451]: time="2025-04-08T23:57:58.164652902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:57:58 ha-061400 dockerd[1451]: time="2025-04-08T23:57:58.164937104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:57:58 ha-061400 dockerd[1451]: time="2025-04-08T23:57:58.164976104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:57:58 ha-061400 dockerd[1451]: time="2025-04-08T23:57:58.166031911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:57:58 ha-061400 cri-dockerd[1341]: time="2025-04-08T23:57:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/848887aaa74b44a80c763209316ef88ccb828e4339ad2d5c404a66fcf26117af/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 08 23:58:00 ha-061400 cri-dockerd[1341]: time="2025-04-08T23:58:00Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 08 23:58:00 ha-061400 dockerd[1451]: time="2025-04-08T23:58:00.367046503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 08 23:58:00 ha-061400 dockerd[1451]: time="2025-04-08T23:58:00.367216405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 08 23:58:00 ha-061400 dockerd[1451]: time="2025-04-08T23:58:00.367862013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 08 23:58:00 ha-061400 dockerd[1451]: time="2025-04-08T23:58:00.368496120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
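
	Note: the cri-dockerd entries show it rewriting each sandbox's resolv.conf: the coredns and storage-provisioner sandboxes inherit the VM's upstream nameserver (192.168.112.1), while the busybox sandbox (848887aaa74b...) gets the in-cluster DNS service (10.96.0.10) plus the ndots:5 search list. The rewritten file can be read back from the exact path the log names, assuming the sandbox still exists; a sketch:

	    minikube ssh -p ha-061400 "sudo cat /var/lib/docker/containers/848887aaa74b44a80c763209316ef88ccb828e4339ad2d5c404a66fcf26117af/resolv.conf"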
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a9e84d4448026       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   848887aaa74b4       busybox-58667487b6-8xfwm
	fa7952995b810       c69fa2e9cbf5f                                                                                         26 minutes ago      Running             coredns                   0                   1044fd2112454       coredns-668d6bf9bc-rzk8c
	ac90a50565e40       c69fa2e9cbf5f                                                                                         26 minutes ago      Running             coredns                   0                   b970cca1abdff       coredns-668d6bf9bc-scvcr
	cb7647ddff9e9       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   053f18a3f15a4       storage-provisioner
	f72554e173731       kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495              27 minutes ago      Running             kindnet-cni               0                   5f2e5e183eeaa       kindnet-44mc6
	231ada3088443       f1332858868e1                                                                                         27 minutes ago      Running             kube-proxy                0                   cfd0b3b4da1c5       kube-proxy-lr9jb
	697735ce06c27       ghcr.io/kube-vip/kube-vip@sha256:e01c90bcdd3eb37a46aaf04f6c86cca3e66dd0db7a231f3c8e8aa105635c158a     27 minutes ago      Running             kube-vip                  0                   70109836f70c1       kube-vip-ha-061400
	cd88701b3604f       b6a454c5a800d                                                                                         27 minutes ago      Running             kube-controller-manager   0                   abf0986ea8b52       kube-controller-manager-ha-061400
	73e54c2230f8c       a9e7e6b294baf                                                                                         27 minutes ago      Running             etcd                      0                   6f42583efa51d       etcd-ha-061400
	327b3e42a6dbb       d8e673e7c9983                                                                                         27 minutes ago      Running             kube-scheduler            0                   1dd3407ceda46       kube-scheduler-ha-061400
	f7ba71d60c8f5       85b7a174738ba                                                                                         27 minutes ago      Running             kube-apiserver            0                   a8b448e178628       kube-apiserver-ha-061400
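
	Note: the IMAGE column mixes full references with truncated image IDs (e.g. c69fa2e9cbf5f for the two coredns containers). An ID prefix can be resolved back to a tag while the image is still cached on the node; a sketch:

	    minikube ssh -p ha-061400 "docker image inspect c69fa2e9cbf5f --format '{{index .RepoTags 0}}'"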
	
	
	==> coredns [ac90a50565e4] <==
	[INFO] 10.244.0.4:58231 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000371404s
	[INFO] 10.244.0.4:50702 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000219003s
	[INFO] 10.244.2.3:55353 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000259603s
	[INFO] 10.244.2.3:42915 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186002s
	[INFO] 10.244.2.3:48660 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000365805s
	[INFO] 10.244.2.3:36593 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163802s
	[INFO] 10.244.2.2:52254 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150102s
	[INFO] 10.244.2.2:46234 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000192702s
	[INFO] 10.244.2.2:60163 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080101s
	[INFO] 10.244.0.4:43940 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000284903s
	[INFO] 10.244.0.4:40825 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000481805s
	[INFO] 10.244.2.3:51991 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191003s
	[INFO] 10.244.2.3:43007 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142702s
	[INFO] 10.244.2.3:37819 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067201s
	[INFO] 10.244.2.2:46496 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220903s
	[INFO] 10.244.2.2:34047 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000330104s
	[INFO] 10.244.2.2:45982 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131101s
	[INFO] 10.244.0.4:58806 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000303904s
	[INFO] 10.244.0.4:55429 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000246702s
	[INFO] 10.244.0.4:55415 - 5 "PTR IN 1.112.168.192.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd 106 0.000183102s
	[INFO] 10.244.2.3:41378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000242503s
	[INFO] 10.244.2.3:42150 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000210902s
	[INFO] 10.244.2.3:48171 - 5 "PTR IN 1.112.168.192.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd 106 0.000087501s
	[INFO] 10.244.2.2:36492 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000769709s
	[INFO] 10.244.2.2:60128 - 5 "PTR IN 1.112.168.192.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd 106 0.000074201s
	
	
	==> coredns [fa7952995b81] <==
	[INFO] 10.244.0.4:53950 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.182613825s
	[INFO] 10.244.2.3:45134 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001231715s
	[INFO] 10.244.2.2:48114 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160202s
	[INFO] 10.244.2.2:41318 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000343204s
	[INFO] 10.244.2.2:42769 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.32989453s
	[INFO] 10.244.0.4:55193 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000239002s
	[INFO] 10.244.0.4:35997 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024457283s
	[INFO] 10.244.0.4:45383 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183302s
	[INFO] 10.244.2.3:48577 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000093201s
	[INFO] 10.244.2.3:41996 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122501s
	[INFO] 10.244.2.3:52550 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.051362995s
	[INFO] 10.244.2.3:35001 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000235703s
	[INFO] 10.244.2.2:41847 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175303s
	[INFO] 10.244.2.2:41365 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000095201s
	[INFO] 10.244.2.2:57717 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145002s
	[INFO] 10.244.2.2:58572 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000212003s
	[INFO] 10.244.2.2:59561 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000065801s
	[INFO] 10.244.0.4:37240 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000140902s
	[INFO] 10.244.0.4:45692 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000197203s
	[INFO] 10.244.2.3:32983 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142402s
	[INFO] 10.244.2.2:43492 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125701s
	[INFO] 10.244.0.4:50466 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000134702s
	[INFO] 10.244.2.3:46680 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000414104s
	[INFO] 10.244.2.2:53559 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228303s
	[INFO] 10.244.2.2:53088 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000076001s
	
	
	==> describe nodes <==
	Name:               ha-061400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-061400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83
	                    minikube.k8s.io/name=ha-061400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_08T23_49_12_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Apr 2025 23:49:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-061400
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Apr 2025 00:16:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Apr 2025 00:13:58 +0000   Tue, 08 Apr 2025 23:49:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Apr 2025 00:13:58 +0000   Tue, 08 Apr 2025 23:49:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Apr 2025 00:13:58 +0000   Tue, 08 Apr 2025 23:49:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Apr 2025 00:13:58 +0000   Tue, 08 Apr 2025 23:49:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.119.206
	  Hostname:    ha-061400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b3d330f5715e45408e02849423800390
	  System UUID:                3aad7807-a96f-3942-abc1-aa927c98bb39
	  Boot ID:                    9ecd26fe-65e5-41d9-ac46-3435dfdf7d65
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-8xfwm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-668d6bf9bc-rzk8c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-668d6bf9bc-scvcr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-061400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-44mc6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-061400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-061400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-lr9jb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-061400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-061400                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27m   kube-proxy       
	  Normal  Starting                 27m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m   kubelet          Node ha-061400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m   kubelet          Node ha-061400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m   kubelet          Node ha-061400 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27m   node-controller  Node ha-061400 event: Registered Node ha-061400 in Controller
	  Normal  NodeReady                26m   kubelet          Node ha-061400 status is now: NodeReady
	  Normal  RegisteredNode           23m   node-controller  Node ha-061400 event: Registered Node ha-061400 in Controller
	  Normal  RegisteredNode           19m   node-controller  Node ha-061400 event: Registered Node ha-061400 in Controller
	
	
	Name:               ha-061400-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-061400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83
	                    minikube.k8s.io/name=ha-061400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_08T23_52_53_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Apr 2025 23:52:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-061400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Apr 2025 00:15:25 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 09 Apr 2025 00:11:06 +0000   Wed, 09 Apr 2025 00:16:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 09 Apr 2025 00:11:06 +0000   Wed, 09 Apr 2025 00:16:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 09 Apr 2025 00:11:06 +0000   Wed, 09 Apr 2025 00:16:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 09 Apr 2025 00:11:06 +0000   Wed, 09 Apr 2025 00:16:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.118.215
	  Hostname:    ha-061400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 af8c08d5c0dd43b89d14f9b41ee99f4d
	  System UUID:                dfbd2a65-43f9-ef48-83c1-f4a679e65267
	  Boot ID:                    2ff197ae-e86f-40f6-a2e5-ef6f3f5aea9a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-061400-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-7mvqz                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-061400-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-061400-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-nkwqr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-061400-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-061400-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node ha-061400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node ha-061400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node ha-061400-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           23m                node-controller  Node ha-061400-m02 event: Registered Node ha-061400-m02 in Controller
	  Normal  RegisteredNode           23m                node-controller  Node ha-061400-m02 event: Registered Node ha-061400-m02 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-061400-m02 event: Registered Node ha-061400-m02 in Controller
	  Normal  NodeNotReady             8s                 node-controller  Node ha-061400-m02 status is now: NodeNotReady
	
	
	Name:               ha-061400-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-061400-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83
	                    minikube.k8s.io/name=ha-061400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_08T23_56_53_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Apr 2025 23:56:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-061400-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Apr 2025 00:16:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Apr 2025 00:13:07 +0000   Tue, 08 Apr 2025 23:56:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Apr 2025 00:13:07 +0000   Tue, 08 Apr 2025 23:56:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Apr 2025 00:13:07 +0000   Tue, 08 Apr 2025 23:56:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Apr 2025 00:13:07 +0000   Tue, 08 Apr 2025 23:57:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.126.102
	  Hostname:    ha-061400-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 9278cc1050a0476f97f6d184e6bf83da
	  System UUID:                b21adb76-59a6-864d-b150-09cc92d14a3f
	  Boot ID:                    3dcbf462-4d94-4440-bbfb-e532cb8d8109
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-rjkqv                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  default                     busybox-58667487b6-rxp4w                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-061400-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-d8bcw                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-061400-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-061400-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-rl7bv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-061400-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-061400-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-061400-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-061400-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-061400-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node ha-061400-m03 event: Registered Node ha-061400-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-061400-m03 event: Registered Node ha-061400-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-061400-m03 event: Registered Node ha-061400-m03 in Controller
	
	
	Name:               ha-061400-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-061400-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83
	                    minikube.k8s.io/name=ha-061400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_09T00_02_22_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Apr 2025 00:02:22 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-061400-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Apr 2025 00:16:19 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 09 Apr 2025 00:11:42 +0000   Wed, 09 Apr 2025 00:16:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 09 Apr 2025 00:11:42 +0000   Wed, 09 Apr 2025 00:16:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 09 Apr 2025 00:11:42 +0000   Wed, 09 Apr 2025 00:16:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 09 Apr 2025 00:11:42 +0000   Wed, 09 Apr 2025 00:16:19 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.118.226
	  Hostname:    ha-061400-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 e187ed9a56ad4966b931dda81274e89f
	  System UUID:                336f6b82-af4f-7244-8df8-ddc85455b058
	  Boot ID:                    5f3aa358-97e3-4ffd-8650-2b8fb0846218
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2xp82       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-proxy-l8fgj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x2 over 14m)  kubelet          Node ha-061400-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet          Node ha-061400-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x2 over 14m)  kubelet          Node ha-061400-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node ha-061400-m04 event: Registered Node ha-061400-m04 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-061400-m04 event: Registered Node ha-061400-m04 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-061400-m04 event: Registered Node ha-061400-m04 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-061400-m04 status is now: NodeReady
	  Normal  NodeNotReady             7s                 node-controller  Node ha-061400-m04 status is now: NodeNotReady
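
	Note: at the time of the snapshot the cluster is split: ha-061400 and ha-061400-m03 report Ready, while ha-061400-m02 and ha-061400-m04 have every condition Unknown ("Kubelet stopped posting node status"), carry the node.kubernetes.io/unreachable NoSchedule/NoExecute taints, and were marked NodeNotReady by the node-controller 8s and 7s before the dump. The same split can be seen directly, assuming the context still resolves:

	    kubectl --context ha-061400 get nodes
	    # the taints the node-controller applied to the unreachable members
	    kubectl --context ha-061400 describe node ha-061400-m02 | grep -A1 'Taints:'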
	
	
	==> dmesg <==
	[  +7.348956] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr 8 23:48] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.161005] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[ +30.446502] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[  +0.106580] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.536755] systemd-fstab-generator[1046]: Ignoring "noauto" option for root device
	[  +0.218326] systemd-fstab-generator[1058]: Ignoring "noauto" option for root device
	[  +0.228069] systemd-fstab-generator[1072]: Ignoring "noauto" option for root device
	[  +2.927372] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.218960] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +0.208393] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[  +0.272770] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[ +11.337963] systemd-fstab-generator[1436]: Ignoring "noauto" option for root device
	[  +0.128861] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.599899] systemd-fstab-generator[1703]: Ignoring "noauto" option for root device
	[  +6.562171] systemd-fstab-generator[1856]: Ignoring "noauto" option for root device
	[  +0.102752] kauditd_printk_skb: 74 callbacks suppressed
	[Apr 8 23:49] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.364803] systemd-fstab-generator[2382]: Ignoring "noauto" option for root device
	[  +6.262488] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.472028] kauditd_printk_skb: 29 callbacks suppressed
	[Apr 8 23:52] hrtimer: interrupt took 3515625 ns
	[ +53.513520] kauditd_printk_skb: 26 callbacks suppressed
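
	Note: the lone "hrtimer: interrupt took 3515625 ns" at 23:52 is a ~3.5 ms timer stall inside the VM, the kind of host-scheduling hiccup that can later surface as etcd heartbeat buffering. Recurrences can be checked for with a sketch like:

	    minikube ssh -p ha-061400 "dmesg | grep -i hrtimer"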
	
	
	==> etcd [73e54c2230f8] <==
	{"level":"warn","ts":"2025-04-09T00:16:26.380740Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.386491Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.389307Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.394470Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.398746Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.403613Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.414065Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.418136Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.419581Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.429780Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.436322Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.441295Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.448649Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.457349Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.460684Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.466496Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.473760Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.477504Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.481256Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.486757Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.490553Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.499953Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.534304Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.586620Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-09T00:16:26.586985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9ab5492bf637f55c","from":"9ab5492bf637f55c","remote-peer-id":"6fcf97e478b2d03e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:16:26 up 29 min,  0 users,  load average: 0.31, 0.55, 0.45
	Linux ha-061400 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f72554e17373] <==
	I0409 00:15:55.635069       1 main.go:301] handling current node
	I0409 00:16:05.635032       1 main.go:297] Handling node with IPs: map[192.168.119.206:{}]
	I0409 00:16:05.635354       1 main.go:301] handling current node
	I0409 00:16:05.637064       1 main.go:297] Handling node with IPs: map[192.168.118.215:{}]
	I0409 00:16:05.637112       1 main.go:324] Node ha-061400-m02 has CIDR [10.244.1.0/24] 
	I0409 00:16:05.637296       1 main.go:297] Handling node with IPs: map[192.168.126.102:{}]
	I0409 00:16:05.637305       1 main.go:324] Node ha-061400-m03 has CIDR [10.244.2.0/24] 
	I0409 00:16:05.637713       1 main.go:297] Handling node with IPs: map[192.168.118.226:{}]
	I0409 00:16:05.638237       1 main.go:324] Node ha-061400-m04 has CIDR [10.244.3.0/24] 
	I0409 00:16:15.631622       1 main.go:297] Handling node with IPs: map[192.168.119.206:{}]
	I0409 00:16:15.631862       1 main.go:301] handling current node
	I0409 00:16:15.631951       1 main.go:297] Handling node with IPs: map[192.168.118.215:{}]
	I0409 00:16:15.632037       1 main.go:324] Node ha-061400-m02 has CIDR [10.244.1.0/24] 
	I0409 00:16:15.632546       1 main.go:297] Handling node with IPs: map[192.168.126.102:{}]
	I0409 00:16:15.632819       1 main.go:324] Node ha-061400-m03 has CIDR [10.244.2.0/24] 
	I0409 00:16:15.633302       1 main.go:297] Handling node with IPs: map[192.168.118.226:{}]
	I0409 00:16:15.633384       1 main.go:324] Node ha-061400-m04 has CIDR [10.244.3.0/24] 
	I0409 00:16:25.625512       1 main.go:297] Handling node with IPs: map[192.168.118.226:{}]
	I0409 00:16:25.625826       1 main.go:324] Node ha-061400-m04 has CIDR [10.244.3.0/24] 
	I0409 00:16:25.626002       1 main.go:297] Handling node with IPs: map[192.168.119.206:{}]
	I0409 00:16:25.626012       1 main.go:301] handling current node
	I0409 00:16:25.626025       1 main.go:297] Handling node with IPs: map[192.168.118.215:{}]
	I0409 00:16:25.626029       1 main.go:324] Node ha-061400-m02 has CIDR [10.244.1.0/24] 
	I0409 00:16:25.626125       1 main.go:297] Handling node with IPs: map[192.168.126.102:{}]
	I0409 00:16:25.626132       1 main.go:324] Node ha-061400-m03 has CIDR [10.244.2.0/24] 
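
Here kindnet is doing its periodic reconcile: for every node it records that node's pod CIDR so it can program inter-node routes. The CIDRs it logs can be cross-checked against what the node-ipam controller actually allocated:

    kubectl --context ha-061400 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'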
	
	
	==> kube-apiserver [f7ba71d60c8f] <==
	I0408 23:49:11.141029       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0408 23:49:11.177765       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0408 23:49:11.220917       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0408 23:49:14.757769       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0408 23:49:14.871150       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0408 23:56:46.575240       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0408 23:56:46.575358       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 14.4µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0408 23:56:46.576463       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0408 23:56:46.577604       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0408 23:56:46.578958       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="20.917138ms" method="PATCH" path="/api/v1/namespaces/default/events/ha-061400-m03.18347d3120e7d3d8" result=null
	E0408 23:58:04.947664       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54806: use of closed network connection
	E0408 23:58:05.527144       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54810: use of closed network connection
	E0408 23:58:07.371787       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54812: use of closed network connection
	E0408 23:58:07.941292       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54814: use of closed network connection
	E0408 23:58:08.521650       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54816: use of closed network connection
	E0408 23:58:09.055570       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54818: use of closed network connection
	E0408 23:58:09.541335       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54820: use of closed network connection
	E0408 23:58:10.039595       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54822: use of closed network connection
	E0408 23:58:10.529783       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54824: use of closed network connection
	E0408 23:58:11.478313       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54827: use of closed network connection
	E0408 23:58:22.003095       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54830: use of closed network connection
	E0408 23:58:22.524565       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54833: use of closed network connection
	E0408 23:58:33.075009       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54835: use of closed network connection
	E0408 23:58:33.566498       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54838: use of closed network connection
	E0408 23:58:44.109837       1 conn.go:339] Error on socket receive: read tcp 192.168.127.254:8443->192.168.112.1:54840: use of closed network connection
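
The "Handler timeout" errors at 23:56:46 and the later bursts of "use of closed network connection" on port 8443 record clients on the Windows host (192.168.112.1) disconnecting mid-request; by themselves they do not mean the API server is unhealthy. Its health endpoints can be queried directly, assuming the kubeconfig context is still valid:

    kubectl --context ha-061400 get --raw '/readyz?verbose'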
	
	
	==> kube-controller-manager [cd88701b3604] <==
	I0409 00:02:24.560013       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m04"
	I0409 00:02:24.637484       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m04"
	I0409 00:02:32.387350       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m04"
	I0409 00:02:50.752472       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m04"
	I0409 00:02:50.757928       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-061400-m04"
	I0409 00:02:50.803232       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m04"
	I0409 00:02:52.921128       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m04"
	I0409 00:02:53.339377       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m04"
	I0409 00:02:54.482996       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m03"
	I0409 00:03:45.987871       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400"
	I0409 00:06:01.118921       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m02"
	I0409 00:06:36.794629       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m04"
	I0409 00:08:01.706835       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m03"
	I0409 00:08:52.787562       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400"
	I0409 00:11:07.007982       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m02"
	I0409 00:11:42.744159       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m04"
	I0409 00:13:07.846816       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m03"
	I0409 00:13:58.302770       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400"
	I0409 00:16:18.548156       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m02"
	I0409 00:16:18.548848       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-061400-m04"
	I0409 00:16:18.593137       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m02"
	I0409 00:16:19.776750       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m04"
	I0409 00:16:19.826051       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m04"
	I0409 00:16:19.895045       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m02"
	I0409 00:16:23.926381       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-061400-m04"
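
The "Can't get CPU or zone information for node" lines come from the EndpointSlice controller's topology cache, which needs per-node allocatable CPU and a topology.kubernetes.io/zone label before it will compute topology hints; minikube nodes carry no zone label, so the message is expected noise here. A quick way to see what the controller sees (a cross-check, not a fix):

    kubectl --context ha-061400 get node ha-061400-m04 -o jsonpath='{.status.allocatable.cpu}{"\t"}{.metadata.labels.topology\.kubernetes\.io/zone}{"\n"}'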
	
	
	==> kube-proxy [231ada308844] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0408 23:49:17.949819       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0408 23:49:18.021892       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.119.206"]
	E0408 23:49:18.026305       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0408 23:49:18.099252       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0408 23:49:18.099424       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0408 23:49:18.099462       1 server_linux.go:170] "Using iptables Proxier"
	I0408 23:49:18.105499       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0408 23:49:18.107446       1 server.go:497] "Version info" version="v1.32.2"
	I0408 23:49:18.107621       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0408 23:49:18.114618       1 config.go:199] "Starting service config controller"
	I0408 23:49:18.115991       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0408 23:49:18.116218       1 config.go:329] "Starting node config controller"
	I0408 23:49:18.116303       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0408 23:49:18.120167       1 config.go:105] "Starting endpoint slice config controller"
	I0408 23:49:18.120207       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0408 23:49:18.216569       1 shared_informer.go:320] Caches are synced for service config
	I0408 23:49:18.216693       1 shared_informer.go:320] Caches are synced for node config
	I0408 23:49:18.221130       1 shared_informer.go:320] Caches are synced for endpoint slice config
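
The truncated fragment at the top of this block is the tail of an "Error cleaning up nftables rules" message: the nft "add table" calls fail with "Operation not supported" on this Buildroot kernel, after which kube-proxy falls back to the iptables proxier ("Using iptables Proxier" above), so the errors are cosmetic. Whether the guest kernel exposes nftables at all can be probed from the host, assuming the nft binary is present in the guest image (it may not be):

    out/minikube-windows-amd64.exe ssh -p ha-061400 -- sudo nft list tables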
	
	
	==> kube-scheduler [327b3e42a6db] <==
	E0408 23:49:08.088215       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0408 23:49:08.166036       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0408 23:49:08.166177       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0408 23:49:08.166378       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0408 23:49:08.166482       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0408 23:49:10.132153       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0408 23:57:57.037950       1 cache.go:504] "Pod was added to a different node than it was assumed" podKey="9db65570-aafe-4092-9a0e-365b7d2881f6" pod="default/busybox-58667487b6-rxp4w" assumedNode="ha-061400-m03" currentNode="ha-061400-m02"
	E0408 23:57:57.044544       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-rxp4w\": pod busybox-58667487b6-rxp4w is already assigned to node \"ha-061400-m03\"" plugin="DefaultBinder" pod="default/busybox-58667487b6-rxp4w" node="ha-061400-m02"
	E0408 23:57:57.050241       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod 9db65570-aafe-4092-9a0e-365b7d2881f6(default/busybox-58667487b6-rxp4w) was assumed on ha-061400-m02 but assigned to ha-061400-m03" pod="default/busybox-58667487b6-rxp4w"
	E0408 23:57:57.050450       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-rxp4w\": pod busybox-58667487b6-rxp4w is already assigned to node \"ha-061400-m03\"" pod="default/busybox-58667487b6-rxp4w"
	I0408 23:57:57.050504       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-58667487b6-rxp4w" node="ha-061400-m03"
	E0409 00:02:22.209566       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-vx64b\": pod kube-proxy-vx64b is already assigned to node \"ha-061400-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-vx64b" node="ha-061400-m04"
	E0409 00:02:22.209705       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-vx64b\": pod kube-proxy-vx64b is already assigned to node \"ha-061400-m04\"" pod="kube-system/kube-proxy-vx64b"
	E0409 00:02:22.229289       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-sfdwl\": pod kindnet-sfdwl is already assigned to node \"ha-061400-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-sfdwl" node="ha-061400-m04"
	E0409 00:02:22.229370       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod 0ec47546-edfc-4aee-bce2-0869e66ee852(kube-system/kindnet-sfdwl) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-sfdwl"
	E0409 00:02:22.229524       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-sfdwl\": pod kindnet-sfdwl is already assigned to node \"ha-061400-m04\"" pod="kube-system/kindnet-sfdwl"
	I0409 00:02:22.229609       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-sfdwl" node="ha-061400-m04"
	E0409 00:02:22.286443       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-blz8v\": pod kube-proxy-blz8v is already assigned to node \"ha-061400-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-blz8v" node="ha-061400-m04"
	E0409 00:02:22.287078       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-blz8v\": pod kube-proxy-blz8v is already assigned to node \"ha-061400-m04\"" pod="kube-system/kube-proxy-blz8v"
	E0409 00:02:22.344568       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qzj76\": pod kindnet-qzj76 is already assigned to node \"ha-061400-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qzj76" node="ha-061400-m04"
	E0409 00:02:22.344643       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod 8f606204-dbc1-4bdc-9156-1134232b7db2(kube-system/kindnet-qzj76) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-qzj76"
	E0409 00:02:22.344727       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qzj76\": pod kindnet-qzj76 is already assigned to node \"ha-061400-m04\"" pod="kube-system/kindnet-qzj76"
	I0409 00:02:22.344827       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qzj76" node="ha-061400-m04"
	E0409 00:02:22.345320       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-l8fgj\": pod kube-proxy-l8fgj is already assigned to node \"ha-061400-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-l8fgj" node="ha-061400-m04"
	E0409 00:02:22.345500       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-l8fgj\": pod kube-proxy-l8fgj is already assigned to node \"ha-061400-m04\"" pod="kube-system/kube-proxy-l8fgj"
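
The "already assigned to node" and "ForgetPod failed" errors are bind races: the scheduler retried a bind after a slow API response, found the pod already bound, logged "Pod has been assigned to node", and dropped it from the retry queue, so no pod was left unscheduled. In an HA profile it is also worth confirming that exactly one scheduler instance holds the leader lease:

    kubectl --context ha-061400 -n kube-system get lease kube-scheduler -o jsonpath='{.spec.holderIdentity}{"\n"}'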
	
	
	==> kubelet <==
	Apr 09 00:12:11 ha-061400 kubelet[2389]: E0409 00:12:11.324990    2389 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 09 00:12:11 ha-061400 kubelet[2389]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 09 00:12:11 ha-061400 kubelet[2389]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 09 00:12:11 ha-061400 kubelet[2389]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 09 00:12:11 ha-061400 kubelet[2389]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 09 00:13:11 ha-061400 kubelet[2389]: E0409 00:13:11.326345    2389 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 09 00:13:11 ha-061400 kubelet[2389]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 09 00:13:11 ha-061400 kubelet[2389]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 09 00:13:11 ha-061400 kubelet[2389]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 09 00:13:11 ha-061400 kubelet[2389]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 09 00:14:11 ha-061400 kubelet[2389]: E0409 00:14:11.327989    2389 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 09 00:14:11 ha-061400 kubelet[2389]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 09 00:14:11 ha-061400 kubelet[2389]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 09 00:14:11 ha-061400 kubelet[2389]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 09 00:14:11 ha-061400 kubelet[2389]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 09 00:15:11 ha-061400 kubelet[2389]: E0409 00:15:11.324456    2389 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 09 00:15:11 ha-061400 kubelet[2389]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 09 00:15:11 ha-061400 kubelet[2389]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 09 00:15:11 ha-061400 kubelet[2389]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 09 00:15:11 ha-061400 kubelet[2389]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 09 00:16:11 ha-061400 kubelet[2389]: E0409 00:16:11.322581    2389 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 09 00:16:11 ha-061400 kubelet[2389]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 09 00:16:11 ha-061400 kubelet[2389]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 09 00:16:11 ha-061400 kubelet[2389]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 09 00:16:11 ha-061400 kubelet[2389]:  > table="nat" chain="KUBE-KUBELET-CANARY"
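
Every kubelet "iptables canary" failure above is the same underlying problem: the guest kernel provides no ip6tables nat table, so the IPv6 canary chain cannot be created; IPv4 rules are unaffected, which matches kube-proxy's "No iptables support for family IPv6" earlier. A direct check, assuming the table is provided by a loadable module named ip6table_nat (it may instead be absent from this kernel entirely):

    out/minikube-windows-amd64.exe ssh -p ha-061400 -- sh -c "lsmod | grep ip6table || echo ip6table modules not loaded"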
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-061400 -n ha-061400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-061400 -n ha-061400: (12.2380206s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-061400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (84.72s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (58.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-611500 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-611500 -- exec busybox-58667487b6-c426d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-611500 -- exec busybox-58667487b6-c426d -- sh -c "ping -c 1 192.168.112.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-611500 -- exec busybox-58667487b6-c426d -- sh -c "ping -c 1 192.168.112.1": exit status 1 (10.4610726s)

                                                
                                                
-- stdout --
	PING 192.168.112.1 (192.168.112.1): 56 data bytes
	
	--- 192.168.112.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (192.168.112.1) from pod (busybox-58667487b6-c426d): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-611500 -- exec busybox-58667487b6-q97dd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-611500 -- exec busybox-58667487b6-q97dd -- sh -c "ping -c 1 192.168.112.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-611500 -- exec busybox-58667487b6-q97dd -- sh -c "ping -c 1 192.168.112.1": exit status 1 (10.4859984s)

                                                
                                                
-- stdout --
	PING 192.168.112.1 (192.168.112.1): 56 data bytes
	
	--- 192.168.112.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (192.168.112.1) from pod (busybox-58667487b6-q97dd): exit status 1
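
Both pods transmit their ICMP echo (the PING header prints, and resolving host.minikube.internal succeeded in the preceding steps) but get no reply from the Hyper-V host at 192.168.112.1. On the Default Switch this pattern is most often the Windows host firewall dropping inbound ICMPv4 echo from the NAT subnet, rather than a cluster networking fault. A hypothetical rule for testing that theory on the host (the rule name is illustrative; it is not part of the test suite):

    New-NetFirewallRule -DisplayName "Allow ICMPv4-In (minikube)" -Direction Inbound -Protocol ICMPv4 -IcmpType 8 -Action Allow
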
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-611500 -n multinode-611500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-611500 -n multinode-611500: (11.9865321s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 logs -n 25: (8.8008951s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-936300 ssh -- ls                    | mount-start-2-936300 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:42 UTC | 09 Apr 25 00:42 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-936300                           | mount-start-1-936300 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:42 UTC | 09 Apr 25 00:43 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-936300 ssh -- ls                    | mount-start-2-936300 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:43 UTC | 09 Apr 25 00:43 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-936300                           | mount-start-2-936300 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:43 UTC | 09 Apr 25 00:43 UTC |
	| start   | -p mount-start-2-936300                           | mount-start-2-936300 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:43 UTC | 09 Apr 25 00:45 UTC |
	| mount   | C:\Users\jenkins.minikube6:/minikube-host         | mount-start-2-936300 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:45 UTC |                     |
	|         | --profile mount-start-2-936300 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-936300 ssh -- ls                    | mount-start-2-936300 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:45 UTC | 09 Apr 25 00:45 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-936300                           | mount-start-2-936300 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:45 UTC | 09 Apr 25 00:46 UTC |
	| delete  | -p mount-start-1-936300                           | mount-start-1-936300 | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:46 UTC | 09 Apr 25 00:46 UTC |
	| start   | -p multinode-611500                               | multinode-611500     | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:46 UTC | 09 Apr 25 00:53 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-611500 -- apply -f                   | multinode-611500     | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:53 UTC | 09 Apr 25 00:53 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-611500 -- rollout                    | multinode-611500     | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:53 UTC | 09 Apr 25 00:53 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-611500 -- get pods -o                | multinode-611500     | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:53 UTC | 09 Apr 25 00:53 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-611500 -- get pods -o                | multinode-611500     | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:53 UTC | 09 Apr 25 00:53 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-611500 -- exec                       | multinode-611500     | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:53 UTC | 09 Apr 25 00:53 UTC |
	|         | busybox-58667487b6-c426d --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-611500 -- exec                       | multinode-611500     | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:53 UTC | 09 Apr 25 00:53 UTC |
	|         | busybox-58667487b6-q97dd --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-611500 -- exec                       | multinode-611500     | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:53 UTC | 09 Apr 25 00:53 UTC |
	|         | busybox-58667487b6-c426d --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-611500 -- exec                       | multinode-611500     | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:53 UTC | 09 Apr 25 00:53 UTC |
	|         | busybox-58667487b6-q97dd --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-611500 -- exec                       | multinode-611500     | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:53 UTC | 09 Apr 25 00:53 UTC |
	|         | busybox-58667487b6-c426d -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-611500 -- exec                       | multinode-611500     | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:53 UTC | 09 Apr 25 00:53 UTC |
	|         | busybox-58667487b6-q97dd -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-611500 -- get pods -o                | multinode-611500     | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:53 UTC | 09 Apr 25 00:53 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-611500 -- exec                       | multinode-611500     | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:53 UTC | 09 Apr 25 00:53 UTC |
	|         | busybox-58667487b6-c426d                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-611500 -- exec                       | multinode-611500     | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:53 UTC |                     |
	|         | busybox-58667487b6-c426d -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 192.168.112.1                        |                      |                   |         |                     |                     |
	| kubectl | -p multinode-611500 -- exec                       | multinode-611500     | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:53 UTC | 09 Apr 25 00:53 UTC |
	|         | busybox-58667487b6-q97dd                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-611500 -- exec                       | multinode-611500     | minikube6\jenkins | v1.35.0 | 09 Apr 25 00:53 UTC |                     |
	|         | busybox-58667487b6-q97dd -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 192.168.112.1                        |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/09 00:46:17
	Running on machine: minikube6
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0409 00:46:17.773790    2144 out.go:345] Setting OutFile to fd 1324 ...
	I0409 00:46:17.847951    2144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 00:46:17.848144    2144 out.go:358] Setting ErrFile to fd 1252...
	I0409 00:46:17.848190    2144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 00:46:17.869325    2144 out.go:352] Setting JSON to false
	I0409 00:46:17.872377    2144 start.go:129] hostinfo: {"hostname":"minikube6","uptime":16575,"bootTime":1744143002,"procs":178,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0409 00:46:17.872377    2144 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0409 00:46:17.877942    2144 out.go:177] * [multinode-611500] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0409 00:46:17.883034    2144 notify.go:220] Checking for updates...
	I0409 00:46:17.885796    2144 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0409 00:46:17.888441    2144 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0409 00:46:17.891267    2144 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0409 00:46:17.895072    2144 out.go:177]   - MINIKUBE_LOCATION=20501
	I0409 00:46:17.897935    2144 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0409 00:46:17.903086    2144 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 00:46:17.903598    2144 driver.go:404] Setting default libvirt URI to qemu:///system
	I0409 00:46:23.077929    2144 out.go:177] * Using the hyperv driver based on user configuration
	I0409 00:46:23.085475    2144 start.go:297] selected driver: hyperv
	I0409 00:46:23.085475    2144 start.go:901] validating driver "hyperv" against <nil>
	I0409 00:46:23.085475    2144 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0409 00:46:23.133636    2144 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0409 00:46:23.134921    2144 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0409 00:46:23.134921    2144 cni.go:84] Creating CNI manager for ""
	I0409 00:46:23.134921    2144 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0409 00:46:23.134921    2144 start_flags.go:320] Found "CNI" CNI - setting NetworkPlugin=cni
	I0409 00:46:23.134921    2144 start.go:340] cluster config:
	{Name:multinode-611500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-611500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0409 00:46:23.135547    2144 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0409 00:46:23.141338    2144 out.go:177] * Starting "multinode-611500" primary control-plane node in "multinode-611500" cluster
	I0409 00:46:23.145395    2144 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0409 00:46:23.145395    2144 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0409 00:46:23.145395    2144 cache.go:56] Caching tarball of preloaded images
	I0409 00:46:23.145395    2144 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0409 00:46:23.146334    2144 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0409 00:46:23.146626    2144 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\config.json ...
	I0409 00:46:23.146783    2144 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\config.json: {Name:mk1b316a3e25e64b1ecbaf30db7f609a40471a87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:46:23.148121    2144 start.go:360] acquireMachinesLock for multinode-611500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0409 00:46:23.148121    2144 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-611500"
	I0409 00:46:23.148570    2144 start.go:93] Provisioning new machine with config: &{Name:multinode-611500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-611500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0409 00:46:23.148570    2144 start.go:125] createHost starting for "" (driver="hyperv")
	I0409 00:46:23.154387    2144 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0409 00:46:23.154387    2144 start.go:159] libmachine.API.Create for "multinode-611500" (driver="hyperv")
	I0409 00:46:23.154387    2144 client.go:168] LocalClient.Create starting
	I0409 00:46:23.155368    2144 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0409 00:46:23.155368    2144 main.go:141] libmachine: Decoding PEM data...
	I0409 00:46:23.155368    2144 main.go:141] libmachine: Parsing certificate...
	I0409 00:46:23.155368    2144 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0409 00:46:23.155368    2144 main.go:141] libmachine: Decoding PEM data...
	I0409 00:46:23.155368    2144 main.go:141] libmachine: Parsing certificate...
	I0409 00:46:23.156208    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0409 00:46:25.178506    2144 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0409 00:46:25.178506    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:46:25.179467    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0409 00:46:26.862341    2144 main.go:141] libmachine: [stdout =====>] : False
	
	I0409 00:46:26.863160    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:46:26.863253    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0409 00:46:28.310030    2144 main.go:141] libmachine: [stdout =====>] : True
	
	I0409 00:46:28.311063    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:46:28.311160    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0409 00:46:31.958397    2144 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0409 00:46:31.959550    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:46:31.961663    2144 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0409 00:46:32.415422    2144 main.go:141] libmachine: Creating SSH key...
	I0409 00:46:32.784693    2144 main.go:141] libmachine: Creating VM...
	I0409 00:46:32.784693    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0409 00:46:35.625838    2144 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0409 00:46:35.625838    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:46:35.626876    2144 main.go:141] libmachine: Using switch "Default Switch"
	I0409 00:46:35.627000    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0409 00:46:37.336218    2144 main.go:141] libmachine: [stdout =====>] : True
	
	I0409 00:46:37.336218    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:46:37.336218    2144 main.go:141] libmachine: Creating VHD
	I0409 00:46:37.337075    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0409 00:46:41.129213    2144 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C290A6A8-18A9-4096-AC55-9FBC662CEB3E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0409 00:46:41.130003    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:46:41.130153    2144 main.go:141] libmachine: Writing magic tar header
	I0409 00:46:41.130321    2144 main.go:141] libmachine: Writing SSH key tar header
	I0409 00:46:41.144203    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0409 00:46:44.350676    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:46:44.351574    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:46:44.351574    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\disk.vhd' -SizeBytes 20000MB
	I0409 00:46:46.896265    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:46:46.896265    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:46:46.896265    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-611500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0409 00:46:50.389080    2144 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-611500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0409 00:46:50.389843    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:46:50.389843    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-611500 -DynamicMemoryEnabled $false
	I0409 00:46:52.600454    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:46:52.600454    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:46:52.600454    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-611500 -Count 2
	I0409 00:46:54.752859    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:46:54.753749    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:46:54.753749    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-611500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\boot2docker.iso'
	I0409 00:46:57.261796    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:46:57.262964    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:46:57.262964    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-611500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\disk.vhd'
	I0409 00:46:59.884028    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:46:59.884284    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:46:59.884284    2144 main.go:141] libmachine: Starting VM...
	I0409 00:46:59.884284    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-611500
	I0409 00:47:02.959919    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:47:02.959919    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:02.960939    2144 main.go:141] libmachine: Waiting for host to start...
	I0409 00:47:02.960975    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:47:05.180741    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:47:05.181566    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:05.181566    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:47:07.621133    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:47:07.621133    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:08.621497    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:47:10.824746    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:47:10.825144    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:10.825324    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:47:13.360637    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:47:13.360802    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:14.361390    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:47:16.521682    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:47:16.522368    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:16.522434    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:47:18.991573    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:47:18.991912    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:19.992666    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:47:22.156839    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:47:22.156839    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:22.157365    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:47:24.665660    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:47:24.665713    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:25.666543    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:47:27.858214    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:47:27.858278    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:27.858342    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:47:30.433108    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 00:47:30.433108    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:30.433389    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:47:32.553858    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:47:32.554612    2144 main.go:141] libmachine: [stderr =====>] : 
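The repeated state/IP queries after VM start reduce to a polling loop like the following (a sketch using the same expressions the log shows; the one-second pause between attempts is an assumption):

	# Poll until the VM reports Running and its first NIC has an address
	$name = 'multinode-611500'
	do {
	    $state = ( Hyper-V\Get-VM $name ).state
	    $ip    = (( Hyper-V\Get-VM $name ).networkadapters[0]).ipaddresses[0]
	    Start-Sleep -Seconds 1
	} until ($state -eq 'Running' -and $ip)
	$ip   # 192.168.113.157 in this run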
	I0409 00:47:32.554944    2144 machine.go:93] provisionDockerMachine start ...
	I0409 00:47:32.554944    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:47:34.681397    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:47:34.681397    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:34.682426    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:47:37.152402    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 00:47:37.152402    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:37.158144    2144 main.go:141] libmachine: Using SSH client type: native
	I0409 00:47:37.173395    2144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.157 22 <nil> <nil>}
	I0409 00:47:37.173395    2144 main.go:141] libmachine: About to run SSH command:
	hostname
	I0409 00:47:37.315359    2144 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0409 00:47:37.315468    2144 buildroot.go:166] provisioning hostname "multinode-611500"
	I0409 00:47:37.315468    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:47:39.434049    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:47:39.434909    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:39.434909    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:47:41.913527    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 00:47:41.914495    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:41.919974    2144 main.go:141] libmachine: Using SSH client type: native
	I0409 00:47:41.920969    2144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.157 22 <nil> <nil>}
	I0409 00:47:41.921068    2144 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-611500 && echo "multinode-611500" | sudo tee /etc/hostname
	I0409 00:47:42.077188    2144 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-611500
	
	I0409 00:47:42.077323    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:47:44.155638    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:47:44.155638    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:44.155638    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:47:46.573512    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 00:47:46.574083    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:46.579274    2144 main.go:141] libmachine: Using SSH client type: native
	I0409 00:47:46.579942    2144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.157 22 <nil> <nil>}
	I0409 00:47:46.579942    2144 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-611500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-611500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-611500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0409 00:47:46.743338    2144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0409 00:47:46.743524    2144 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0409 00:47:46.743524    2144 buildroot.go:174] setting up certificates
	I0409 00:47:46.743524    2144 provision.go:84] configureAuth start
	I0409 00:47:46.743524    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:47:48.855663    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:47:48.856289    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:48.856583    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:47:51.357541    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 00:47:51.357541    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:51.358241    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:47:53.506032    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:47:53.506032    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:53.506370    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:47:55.947205    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 00:47:55.947205    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:55.947205    2144 provision.go:143] copyHostCerts
	I0409 00:47:55.947205    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0409 00:47:55.947205    2144 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0409 00:47:55.947205    2144 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0409 00:47:55.948116    2144 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0409 00:47:55.949433    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0409 00:47:55.949547    2144 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0409 00:47:55.949547    2144 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0409 00:47:55.949547    2144 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0409 00:47:55.951032    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0409 00:47:55.951032    2144 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0409 00:47:55.951032    2144 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0409 00:47:55.951772    2144 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0409 00:47:55.953330    2144 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-611500 san=[127.0.0.1 192.168.113.157 localhost minikube multinode-611500]
	I0409 00:47:56.139990    2144 provision.go:177] copyRemoteCerts
	I0409 00:47:56.152041    2144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0409 00:47:56.152041    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:47:58.224331    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:47:58.224331    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:47:58.224534    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:48:00.687663    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 00:48:00.688272    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:00.688569    2144 sshutil.go:53] new ssh client: &{IP:192.168.113.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\id_rsa Username:docker}
	I0409 00:48:00.798147    2144 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6460438s)
	I0409 00:48:00.798147    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0409 00:48:00.798147    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0409 00:48:00.841714    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0409 00:48:00.841907    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0409 00:48:00.884632    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0409 00:48:00.884806    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0409 00:48:00.926889    2144 provision.go:87] duration metric: took 14.183178s to configureAuth
	I0409 00:48:00.926889    2144 buildroot.go:189] setting minikube options for container-runtime
	I0409 00:48:00.927697    2144 config.go:182] Loaded profile config "multinode-611500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 00:48:00.927801    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:48:03.058812    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:48:03.058812    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:03.058935    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:48:05.562266    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 00:48:05.562996    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:05.568913    2144 main.go:141] libmachine: Using SSH client type: native
	I0409 00:48:05.569042    2144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.157 22 <nil> <nil>}
	I0409 00:48:05.569042    2144 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0409 00:48:05.703615    2144 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0409 00:48:05.703725    2144 buildroot.go:70] root file system type: tmpfs
	I0409 00:48:05.703837    2144 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0409 00:48:05.704080    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:48:07.765715    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:48:07.765793    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:07.765793    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:48:10.232470    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 00:48:10.233049    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:10.239692    2144 main.go:141] libmachine: Using SSH client type: native
	I0409 00:48:10.240270    2144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.157 22 <nil> <nil>}
	I0409 00:48:10.240270    2144 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0409 00:48:10.406768    2144 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0409 00:48:10.406889    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:48:12.517617    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:48:12.517857    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:12.517954    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:48:15.039004    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 00:48:15.039004    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:15.045774    2144 main.go:141] libmachine: Using SSH client type: native
	I0409 00:48:15.046431    2144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.157 22 <nil> <nil>}
	I0409 00:48:15.046492    2144 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0409 00:48:17.246761    2144 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0409 00:48:17.246761    2144 machine.go:96] duration metric: took 44.6912268s to provisionDockerMachine
	I0409 00:48:17.246761    2144 client.go:171] duration metric: took 1m54.0908608s to LocalClient.Create
	I0409 00:48:17.246761    2144 start.go:167] duration metric: took 1m54.0908608s to libmachine.API.Create "multinode-611500"
	I0409 00:48:17.246761    2144 start.go:293] postStartSetup for "multinode-611500" (driver="hyperv")
	I0409 00:48:17.246761    2144 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0409 00:48:17.261005    2144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0409 00:48:17.261005    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:48:19.338899    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:48:19.339045    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:19.339045    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:48:21.796587    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 00:48:21.797140    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:21.797140    2144 sshutil.go:53] new ssh client: &{IP:192.168.113.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\id_rsa Username:docker}
	I0409 00:48:21.901437    2144 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6402859s)
	I0409 00:48:21.913196    2144 ssh_runner.go:195] Run: cat /etc/os-release
	I0409 00:48:21.917785    2144 command_runner.go:130] > NAME=Buildroot
	I0409 00:48:21.917785    2144 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0409 00:48:21.917785    2144 command_runner.go:130] > ID=buildroot
	I0409 00:48:21.917785    2144 command_runner.go:130] > VERSION_ID=2023.02.9
	I0409 00:48:21.917785    2144 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0409 00:48:21.917785    2144 info.go:137] Remote host: Buildroot 2023.02.9
	I0409 00:48:21.917785    2144 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0409 00:48:21.917785    2144 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0409 00:48:21.917785    2144 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
	I0409 00:48:21.922117    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /etc/ssl/certs/98642.pem
	I0409 00:48:21.933295    2144 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0409 00:48:21.952966    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
	I0409 00:48:21.997711    2144 start.go:296] duration metric: took 4.7508872s for postStartSetup
	I0409 00:48:22.001611    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:48:24.137286    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:48:24.137286    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:24.137471    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:48:26.629448    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 00:48:26.629562    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:26.629702    2144 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\config.json ...
	I0409 00:48:26.632806    2144 start.go:128] duration metric: took 2m3.4825997s to createHost
	I0409 00:48:26.632806    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:48:28.705732    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:48:28.706543    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:28.706543    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:48:31.208132    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 00:48:31.208132    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:31.213611    2144 main.go:141] libmachine: Using SSH client type: native
	I0409 00:48:31.214085    2144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.157 22 <nil> <nil>}
	I0409 00:48:31.214085    2144 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0409 00:48:31.365283    2144 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744159711.382909125
	
	I0409 00:48:31.365397    2144 fix.go:216] guest clock: 1744159711.382909125
	I0409 00:48:31.365397    2144 fix.go:229] Guest: 2025-04-09 00:48:31.382909125 +0000 UTC Remote: 2025-04-09 00:48:26.6328069 +0000 UTC m=+128.941709701 (delta=4.750102225s)
	I0409 00:48:31.365500    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:48:33.382148    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:48:33.382148    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:33.382148    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:48:35.856789    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 00:48:35.857561    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:35.864501    2144 main.go:141] libmachine: Using SSH client type: native
	I0409 00:48:35.864688    2144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.157 22 <nil> <nil>}
	I0409 00:48:35.865237    2144 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744159711
	I0409 00:48:36.008499    2144 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Apr  9 00:48:31 UTC 2025
	
	I0409 00:48:36.008579    2144 fix.go:236] clock set: Wed Apr  9 00:48:31 UTC 2025
	 (err=<nil>)
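The delta reported above is plain epoch arithmetic between the guest clock and the host-side "Remote" timestamp; a hypothetical PowerShell rendering using the values from this run:

	# Guest clock (from 'date +%s.%N') and host-side Remote timestamp (from the log)
	$guestEpoch  = 1744159711.382909125
	$remote      = [DateTimeOffset]::new(2025, 4, 9, 0, 48, 26, [TimeSpan]::Zero).AddTicks(6328069)
	$remoteEpoch = $remote.ToUnixTimeMilliseconds() / 1000.0
	# Guest is ~4.75s ahead, so minikube runs 'sudo date -s @1744159711' in the guest
	'{0:N6}' -f ($guestEpoch - $remoteEpoch)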
	I0409 00:48:36.008579    2144 start.go:83] releasing machines lock for "multinode-611500", held for 2m12.858328s
	I0409 00:48:36.008807    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:48:38.146280    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:48:38.146895    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:38.146895    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:48:40.609533    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 00:48:40.609533    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:40.614704    2144 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0409 00:48:40.615241    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:48:40.624416    2144 ssh_runner.go:195] Run: cat /version.json
	I0409 00:48:40.624416    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:48:42.817160    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:48:42.817160    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:42.817677    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:48:42.821442    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:48:42.821482    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:42.821651    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:48:45.442432    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 00:48:45.443406    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:45.443569    2144 sshutil.go:53] new ssh client: &{IP:192.168.113.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\id_rsa Username:docker}
	I0409 00:48:45.466351    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 00:48:45.466351    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:48:45.466499    2144 sshutil.go:53] new ssh client: &{IP:192.168.113.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\id_rsa Username:docker}
	I0409 00:48:45.537186    2144 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0409 00:48:45.537663    2144 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9228463s)
	W0409 00:48:45.537663    2144 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
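The probe exits 127 because the Windows binary name curl.exe is forwarded into the Linux guest's bash, where no such command exists; that failure is what surfaces as the registry-connectivity warning a few lines below. A host-side check of the same endpoint (hypothetical; not part of the test) could be:

	# Probe the image registry from the Windows host with a 2-second timeout
	Invoke-WebRequest -Uri 'https://registry.k8s.io/' -TimeoutSec 2 -UseBasicParsing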
	I0409 00:48:45.571278    2144 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0409 00:48:45.571934    2144 ssh_runner.go:235] Completed: cat /version.json: (4.9474526s)
	I0409 00:48:45.583256    2144 ssh_runner.go:195] Run: systemctl --version
	I0409 00:48:45.591908    2144 command_runner.go:130] > systemd 252 (252)
	I0409 00:48:45.591968    2144 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0409 00:48:45.603341    2144 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0409 00:48:45.611560    2144 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0409 00:48:45.612589    2144 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0409 00:48:45.623675    2144 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0409 00:48:45.650621    2144 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0409 00:48:45.650981    2144 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0409 00:48:45.650981    2144 start.go:495] detecting cgroup driver to use...
	I0409 00:48:45.651322    2144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0409 00:48:45.657462    2144 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0409 00:48:45.657462    2144 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0409 00:48:45.695939    2144 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0409 00:48:45.706444    2144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0409 00:48:45.735348    2144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0409 00:48:45.753479    2144 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0409 00:48:45.764934    2144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0409 00:48:45.797142    2144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0409 00:48:45.826475    2144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0409 00:48:45.859115    2144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0409 00:48:45.888841    2144 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0409 00:48:45.918221    2144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0409 00:48:45.946687    2144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0409 00:48:45.977366    2144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0409 00:48:46.005859    2144 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0409 00:48:46.022455    2144 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0409 00:48:46.023397    2144 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0409 00:48:46.034626    2144 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0409 00:48:46.066497    2144 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0409 00:48:46.096154    2144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:48:46.287973    2144 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0409 00:48:46.319482    2144 start.go:495] detecting cgroup driver to use...
	I0409 00:48:46.332719    2144 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0409 00:48:46.354991    2144 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0409 00:48:46.355116    2144 command_runner.go:130] > [Unit]
	I0409 00:48:46.355116    2144 command_runner.go:130] > Description=Docker Application Container Engine
	I0409 00:48:46.355116    2144 command_runner.go:130] > Documentation=https://docs.docker.com
	I0409 00:48:46.355189    2144 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0409 00:48:46.355189    2144 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0409 00:48:46.355189    2144 command_runner.go:130] > StartLimitBurst=3
	I0409 00:48:46.355189    2144 command_runner.go:130] > StartLimitIntervalSec=60
	I0409 00:48:46.355189    2144 command_runner.go:130] > [Service]
	I0409 00:48:46.355189    2144 command_runner.go:130] > Type=notify
	I0409 00:48:46.355189    2144 command_runner.go:130] > Restart=on-failure
	I0409 00:48:46.355189    2144 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0409 00:48:46.355189    2144 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0409 00:48:46.355189    2144 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0409 00:48:46.355189    2144 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0409 00:48:46.355358    2144 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0409 00:48:46.355358    2144 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0409 00:48:46.355358    2144 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0409 00:48:46.355358    2144 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0409 00:48:46.355474    2144 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0409 00:48:46.355474    2144 command_runner.go:130] > ExecStart=
	I0409 00:48:46.355474    2144 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0409 00:48:46.355540    2144 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0409 00:48:46.355540    2144 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0409 00:48:46.355540    2144 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0409 00:48:46.355540    2144 command_runner.go:130] > LimitNOFILE=infinity
	I0409 00:48:46.355540    2144 command_runner.go:130] > LimitNPROC=infinity
	I0409 00:48:46.355608    2144 command_runner.go:130] > LimitCORE=infinity
	I0409 00:48:46.355608    2144 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0409 00:48:46.355608    2144 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0409 00:48:46.355608    2144 command_runner.go:130] > TasksMax=infinity
	I0409 00:48:46.355608    2144 command_runner.go:130] > TimeoutStartSec=0
	I0409 00:48:46.355664    2144 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0409 00:48:46.355664    2144 command_runner.go:130] > Delegate=yes
	I0409 00:48:46.355664    2144 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0409 00:48:46.355664    2144 command_runner.go:130] > KillMode=process
	I0409 00:48:46.355664    2144 command_runner.go:130] > [Install]
	I0409 00:48:46.355664    2144 command_runner.go:130] > WantedBy=multi-user.target
	I0409 00:48:46.366777    2144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0409 00:48:46.403452    2144 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0409 00:48:46.449676    2144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0409 00:48:46.486204    2144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0409 00:48:46.527223    2144 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0409 00:48:46.597389    2144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0409 00:48:46.619268    2144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0409 00:48:46.648800    2144 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0409 00:48:46.660851    2144 ssh_runner.go:195] Run: which cri-dockerd
	I0409 00:48:46.666609    2144 command_runner.go:130] > /usr/bin/cri-dockerd
	I0409 00:48:46.678272    2144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0409 00:48:46.694969    2144 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0409 00:48:46.737617    2144 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0409 00:48:46.931537    2144 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0409 00:48:47.109520    2144 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0409 00:48:47.109948    2144 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0409 00:48:47.160185    2144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:48:47.350547    2144 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0409 00:48:49.915162    2144 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5645816s)
	I0409 00:48:49.926121    2144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0409 00:48:49.958308    2144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0409 00:48:49.989090    2144 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0409 00:48:50.166105    2144 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0409 00:48:50.348585    2144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:48:50.542265    2144 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0409 00:48:50.576688    2144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0409 00:48:50.609684    2144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:48:50.800504    2144 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0409 00:48:50.898350    2144 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0409 00:48:50.908749    2144 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0409 00:48:50.917997    2144 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0409 00:48:50.917997    2144 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0409 00:48:50.917997    2144 command_runner.go:130] > Device: 0,22	Inode: 872         Links: 1
	I0409 00:48:50.917997    2144 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0409 00:48:50.917997    2144 command_runner.go:130] > Access: 2025-04-09 00:48:50.845776986 +0000
	I0409 00:48:50.918154    2144 command_runner.go:130] > Modify: 2025-04-09 00:48:50.845776986 +0000
	I0409 00:48:50.918154    2144 command_runner.go:130] > Change: 2025-04-09 00:48:50.848777016 +0000
	I0409 00:48:50.918154    2144 command_runner.go:130] >  Birth: -
	I0409 00:48:50.918242    2144 start.go:563] Will wait 60s for crictl version
	I0409 00:48:50.927792    2144 ssh_runner.go:195] Run: which crictl
	I0409 00:48:50.932822    2144 command_runner.go:130] > /usr/bin/crictl
	I0409 00:48:50.942555    2144 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0409 00:48:50.988394    2144 command_runner.go:130] > Version:  0.1.0
	I0409 00:48:50.988394    2144 command_runner.go:130] > RuntimeName:  docker
	I0409 00:48:50.988394    2144 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0409 00:48:50.988394    2144 command_runner.go:130] > RuntimeApiVersion:  v1
	I0409 00:48:50.988394    2144 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0409 00:48:50.996695    2144 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0409 00:48:51.023256    2144 command_runner.go:130] > 27.4.0
	I0409 00:48:51.032241    2144 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0409 00:48:51.067428    2144 command_runner.go:130] > 27.4.0
	I0409 00:48:51.072881    2144 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0409 00:48:51.072881    2144 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0409 00:48:51.076868    2144 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0409 00:48:51.076868    2144 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0409 00:48:51.076868    2144 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0409 00:48:51.076868    2144 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f4:da:75 Flags:up|broadcast|multicast|running}
	I0409 00:48:51.079869    2144 ip.go:214] interface addr: fe80::e8ab:9cc6:22b1:a5fc/64
	I0409 00:48:51.079869    2144 ip.go:214] interface addr: 192.168.112.1/20
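The interface scan above, which picks the host side of the Hyper-V default-switch network, can be reproduced interactively (a hypothetical one-liner using the standard Get-NetIPAddress cmdlet, not what minikube itself calls):

	# Show the IPv4 address bound to the default-switch vEthernet adapter
	Get-NetIPAddress -InterfaceAlias 'vEthernet (Default Switch)' -AddressFamily IPv4 |
	    Select-Object InterfaceAlias, IPAddress, PrefixLength   # 192.168.112.1/20 in this run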
	I0409 00:48:51.088912    2144 ssh_runner.go:195] Run: grep 192.168.112.1	host.minikube.internal$ /etc/hosts
	I0409 00:48:51.095500    2144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0409 00:48:51.114336    2144 kubeadm.go:883] updating cluster {Name:multinode-611500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-611500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.157 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0409 00:48:51.114336    2144 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0409 00:48:51.124284    2144 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0409 00:48:51.146392    2144 docker.go:689] Got preloaded images: 
	I0409 00:48:51.146392    2144 docker.go:695] registry.k8s.io/kube-apiserver:v1.32.2 wasn't preloaded
	I0409 00:48:51.156415    2144 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0409 00:48:51.173222    2144 command_runner.go:139] > {"Repositories":{}}
	I0409 00:48:51.182170    2144 ssh_runner.go:195] Run: which lz4
	I0409 00:48:51.189584    2144 command_runner.go:130] > /usr/bin/lz4
	I0409 00:48:51.189729    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0409 00:48:51.200326    2144 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0409 00:48:51.205893    2144 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0409 00:48:51.206435    2144 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0409 00:48:51.206596    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (349803115 bytes)
	I0409 00:48:53.779662    2144 docker.go:653] duration metric: took 2.5897066s to copy over tarball
	I0409 00:48:53.789694    2144 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0409 00:49:02.201022    2144 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.4111082s)
	I0409 00:49:02.201044    2144 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0409 00:49:02.261681    2144 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0409 00:49:02.282929    2144 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.3":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.16-0":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.32.2":"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f":"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.32.2":"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90":"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.32.2":"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d":"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.32.2":"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76":"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.10":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136"}}}
	I0409 00:49:02.283012    2144 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
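
repositories.json, dumped above, is Docker's name-to-content-ID index: each repository maps both its tag reference and its digest reference to the same image ID, which is why minikube can regenerate the file in memory and scp it into place after unpacking the preloaded layers. A hedged sketch of reading that structure (the struct mirrors the JSON shape in the log, not Docker's internal types):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// repositories mirrors the JSON layout shown in the log:
	// repo name -> (tag or digest reference) -> image content ID.
	type repositories struct {
		Repositories map[string]map[string]string `json:"Repositories"`
	}

	func main() {
		data, err := os.ReadFile("/var/lib/docker/image/overlay2/repositories.json")
		if err != nil {
			panic(err)
		}
		var repos repositories
		if err := json.Unmarshal(data, &repos); err != nil {
			panic(err)
		}
		for repo, refs := range repos.Repositories {
			fmt.Printf("%s: %d refs\n", repo, len(refs))
		}
	}
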
	I0409 00:49:02.323058    2144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:49:02.543415    2144 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0409 00:49:05.679155    2144 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1356515s)
	I0409 00:49:05.689654    2144 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0409 00:49:05.714295    2144 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.2
	I0409 00:49:05.714295    2144 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.2
	I0409 00:49:05.714295    2144 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.2
	I0409 00:49:05.714295    2144 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.2
	I0409 00:49:05.714295    2144 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0409 00:49:05.714295    2144 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0409 00:49:05.714295    2144 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0409 00:49:05.714295    2144 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0409 00:49:05.714295    2144 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0409 00:49:05.714295    2144 cache_images.go:84] Images are preloaded, skipping loading
	I0409 00:49:05.714295    2144 kubeadm.go:934] updating node { 192.168.113.157 8443 v1.32.2 docker true true} ...
	I0409 00:49:05.714295    2144 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-611500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.113.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:multinode-611500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
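
The kubelet drop-in above is rendered from the node's runtime settings: the binary path is keyed by Kubernetes version, and --hostname-override / --node-ip come from the node record. The empty ExecStart= line is the systemd idiom for clearing the base unit's command before setting a new one. A minimal sketch of rendering such a drop-in (the function shape is illustrative, not minikube's actual template code):

	package main

	import "fmt"

	// renderKubeletDropIn reproduces the drop-in shown in the log. The
	// empty ExecStart= clears the base unit's command (systemd idiom)
	// before the versioned kubelet invocation is set.
	func renderKubeletDropIn(version, nodeName, nodeIP string) string {
		return fmt.Sprintf("[Unit]\nWants=docker.socket\n\n[Service]\nExecStart=\n"+
			"ExecStart=/var/lib/minikube/binaries/%s/kubelet "+
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
			"--config=/var/lib/kubelet/config.yaml --hostname-override=%s "+
			"--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s\n\n[Install]\n",
			version, nodeName, nodeIP)
	}

	func main() {
		fmt.Print(renderKubeletDropIn("v1.32.2", "multinode-611500", "192.168.113.157"))
	}
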
	I0409 00:49:05.724238    2144 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0409 00:49:05.788205    2144 command_runner.go:130] > cgroupfs
	I0409 00:49:05.788678    2144 cni.go:84] Creating CNI manager for ""
	I0409 00:49:05.788907    2144 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0409 00:49:05.788907    2144 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0409 00:49:05.789015    2144 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.113.157 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-611500 NodeName:multinode-611500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.113.157"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.113.157 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0409 00:49:05.789161    2144 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.113.157
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-611500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.113.157"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.113.157"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0409 00:49:05.799490    2144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0409 00:49:05.816735    2144 command_runner.go:130] > kubeadm
	I0409 00:49:05.816813    2144 command_runner.go:130] > kubectl
	I0409 00:49:05.816813    2144 command_runner.go:130] > kubelet
	I0409 00:49:05.816813    2144 binaries.go:44] Found k8s binaries, skipping transfer
	I0409 00:49:05.827634    2144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0409 00:49:05.842620    2144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0409 00:49:05.871843    2144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0409 00:49:05.900546    2144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2303 bytes)
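
The 2303-byte kubeadm.yaml.new just copied is the four-document stream printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, separated by ---). A quick sanity check of that layout, splitting on the document separator rather than pulling in a YAML parser; the path and the check itself are illustrative:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		// kubeadm config files separate documents with a bare "---" line.
		docs := strings.Split(string(data), "\n---\n")
		fmt.Printf("%d YAML documents\n", len(docs))
		for _, doc := range docs {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(line, "kind:") {
					fmt.Println(" ", strings.TrimSpace(line))
				}
			}
		}
	}
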
	I0409 00:49:05.941203    2144 ssh_runner.go:195] Run: grep 192.168.113.157	control-plane.minikube.internal$ /etc/hosts
	I0409 00:49:05.947608    2144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.113.157	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
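
That one-liner is an idempotent /etc/hosts upsert: grep -v strips any stale control-plane.minikube.internal entry, echo appends the current mapping, and the result is staged in /tmp and copied back under sudo. The same logic in Go, as a sketch that writes to a side file instead of sudo-copying over /etc/hosts:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost keeps exactly one tab-separated line mapping host,
	// mirroring the grep -v / echo / cp pipeline in the log above.
	func upsertHost(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path+".new", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := upsertHost("/etc/hosts", "192.168.113.157", "control-plane.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}
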
	I0409 00:49:05.975771    2144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:49:06.157560    2144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0409 00:49:06.183660    2144 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500 for IP: 192.168.113.157
	I0409 00:49:06.183660    2144 certs.go:194] generating shared ca certs ...
	I0409 00:49:06.183765    2144 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:49:06.185122    2144 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0409 00:49:06.185953    2144 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0409 00:49:06.186268    2144 certs.go:256] generating profile certs ...
	I0409 00:49:06.187254    2144 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\client.key
	I0409 00:49:06.187577    2144 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\client.crt with IP's: []
	I0409 00:49:06.405781    2144 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\client.crt ...
	I0409 00:49:06.405781    2144 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\client.crt: {Name:mk6b8aa9881f54fab61c0784c964cd2da99de3ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:49:06.407779    2144 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\client.key ...
	I0409 00:49:06.407779    2144 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\client.key: {Name:mk26bcc6ba9e666dc03ac3702a4f6b55f7d638e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:49:06.408882    2144 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key.7fbc414c
	I0409 00:49:06.408882    2144 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt.7fbc414c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.113.157]
	I0409 00:49:06.817159    2144 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt.7fbc414c ...
	I0409 00:49:06.817159    2144 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt.7fbc414c: {Name:mk6773092f25041db08860cefeeb20dcfe08f273 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:49:06.818958    2144 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key.7fbc414c ...
	I0409 00:49:06.819031    2144 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key.7fbc414c: {Name:mkdb5a10165e68fa05d71ceecf2c4a0a3025ab6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:49:06.820226    2144 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt.7fbc414c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt
	I0409 00:49:06.834717    2144 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key.7fbc414c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key
	I0409 00:49:06.836122    2144 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\proxy-client.key
	I0409 00:49:06.836348    2144 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\proxy-client.crt with IP's: []
	I0409 00:49:07.385454    2144 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\proxy-client.crt ...
	I0409 00:49:07.385454    2144 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\proxy-client.crt: {Name:mkcc221288180fe6e15f3e024d52cffc0cc9b3c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:49:07.387700    2144 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\proxy-client.key ...
	I0409 00:49:07.387700    2144 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\proxy-client.key: {Name:mk31ee73821f14775e4eb3cf8ecdd180e03b64ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
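
Each profile cert above follows the same pattern: generate a key, build a certificate template (client certs get no IPs; the apiserver cert gets the SAN list shown at 00:49:06.408882), and sign it with the shared minikube CA. A compressed sketch of the client-cert case with crypto/x509; the subject fields, file paths, and the PKCS#1 CA key format are assumptions for illustration:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func must(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		caCertPEM, err := os.ReadFile("ca.crt") // illustrative paths
		must(err)
		caKeyPEM, err := os.ReadFile("ca.key")
		must(err)
		caBlock, _ := pem.Decode(caCertPEM)
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		must(err)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes PKCS#1 RSA key
		must(err)

		key, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			// Subject is an assumption for illustration.
			Subject:     pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		must(err)
		must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
	}
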
	I0409 00:49:07.388279    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0409 00:49:07.389384    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0409 00:49:07.389384    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0409 00:49:07.389384    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0409 00:49:07.389384    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0409 00:49:07.389955    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0409 00:49:07.390136    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0409 00:49:07.402460    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0409 00:49:07.403024    2144 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem (1338 bytes)
	W0409 00:49:07.403688    2144 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864_empty.pem, impossibly tiny 0 bytes
	I0409 00:49:07.403861    2144 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0409 00:49:07.404238    2144 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0409 00:49:07.404293    2144 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0409 00:49:07.404293    2144 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0409 00:49:07.405497    2144 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem (1708 bytes)
	I0409 00:49:07.405706    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem -> /usr/share/ca-certificates/9864.pem
	I0409 00:49:07.405706    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /usr/share/ca-certificates/98642.pem
	I0409 00:49:07.405706    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:49:07.407422    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0409 00:49:07.458266    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0409 00:49:07.504588    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0409 00:49:07.554207    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0409 00:49:07.596454    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0409 00:49:07.640552    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0409 00:49:07.682604    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0409 00:49:07.728553    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0409 00:49:07.771146    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem --> /usr/share/ca-certificates/9864.pem (1338 bytes)
	I0409 00:49:07.816574    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /usr/share/ca-certificates/98642.pem (1708 bytes)
	I0409 00:49:07.865330    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0409 00:49:07.915761    2144 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0409 00:49:07.958821    2144 ssh_runner.go:195] Run: openssl version
	I0409 00:49:07.967914    2144 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0409 00:49:07.979091    2144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9864.pem && ln -fs /usr/share/ca-certificates/9864.pem /etc/ssl/certs/9864.pem"
	I0409 00:49:08.008025    2144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9864.pem
	I0409 00:49:08.015867    2144 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr  8 23:04 /usr/share/ca-certificates/9864.pem
	I0409 00:49:08.015867    2144 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 23:04 /usr/share/ca-certificates/9864.pem
	I0409 00:49:08.026692    2144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9864.pem
	I0409 00:49:08.037417    2144 command_runner.go:130] > 51391683
	I0409 00:49:08.051806    2144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9864.pem /etc/ssl/certs/51391683.0"
	I0409 00:49:08.082386    2144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98642.pem && ln -fs /usr/share/ca-certificates/98642.pem /etc/ssl/certs/98642.pem"
	I0409 00:49:08.111597    2144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98642.pem
	I0409 00:49:08.117761    2144 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr  8 23:04 /usr/share/ca-certificates/98642.pem
	I0409 00:49:08.117924    2144 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 23:04 /usr/share/ca-certificates/98642.pem
	I0409 00:49:08.130482    2144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98642.pem
	I0409 00:49:08.138617    2144 command_runner.go:130] > 3ec20f2e
	I0409 00:49:08.150447    2144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98642.pem /etc/ssl/certs/3ec20f2e.0"
	I0409 00:49:08.177864    2144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0409 00:49:08.206114    2144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:49:08.213552    2144 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr  8 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:49:08.213775    2144 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:49:08.224768    2144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:49:08.232916    2144 command_runner.go:130] > b5213941
	I0409 00:49:08.243875    2144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
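
The hash-and-symlink dance above is the OpenSSL CA-directory convention: openssl x509 -hash -noout prints the certificate's subject-name hash (51391683, 3ec20f2e, b5213941 here), and OpenSSL resolves trust lookups through /etc/ssl/certs/<hash>.0 symlinks. A small helper that computes the same link target, shelling out to openssl exactly as the log does:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// subjectHash returns the OpenSSL subject hash for a PEM certificate,
	// i.e. the value the log obtains via "openssl x509 -hash -noout".
	func subjectHash(pemPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			panic(err)
		}
		fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
	}
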
	I0409 00:49:08.276194    2144 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0409 00:49:08.283132    2144 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0409 00:49:08.283640    2144 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0409 00:49:08.283640    2144 kubeadm.go:392] StartCluster: {Name:multinode-611500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-611500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.157 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0409 00:49:08.292950    2144 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0409 00:49:08.333625    2144 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0409 00:49:08.352574    2144 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0409 00:49:08.352574    2144 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0409 00:49:08.352574    2144 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0409 00:49:08.363318    2144 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0409 00:49:08.392910    2144 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0409 00:49:08.410795    2144 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0409 00:49:08.410795    2144 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0409 00:49:08.410795    2144 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0409 00:49:08.410795    2144 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0409 00:49:08.410795    2144 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0409 00:49:08.410795    2144 kubeadm.go:157] found existing configuration files:
	
	I0409 00:49:08.426668    2144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0409 00:49:08.441591    2144 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0409 00:49:08.442394    2144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0409 00:49:08.453840    2144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0409 00:49:08.485096    2144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0409 00:49:08.499159    2144 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0409 00:49:08.500484    2144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0409 00:49:08.515653    2144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0409 00:49:08.543440    2144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0409 00:49:08.560643    2144 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0409 00:49:08.560675    2144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0409 00:49:08.570782    2144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0409 00:49:08.597997    2144 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0409 00:49:08.614250    2144 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0409 00:49:08.615359    2144 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0409 00:49:08.626215    2144 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0409 00:49:08.650125    2144 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0409 00:49:08.931890    2144 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0409 00:49:08.931890    2144 command_runner.go:130] > [init] Using Kubernetes version: v1.32.2
	I0409 00:49:08.932056    2144 kubeadm.go:310] [preflight] Running pre-flight checks
	I0409 00:49:08.932152    2144 command_runner.go:130] > [preflight] Running pre-flight checks
	I0409 00:49:09.073854    2144 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0409 00:49:09.074383    2144 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0409 00:49:09.074647    2144 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0409 00:49:09.074647    2144 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0409 00:49:09.075019    2144 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0409 00:49:09.075019    2144 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0409 00:49:09.094040    2144 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0409 00:49:09.094184    2144 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0409 00:49:09.099892    2144 out.go:235]   - Generating certificates and keys ...
	I0409 00:49:09.100080    2144 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0409 00:49:09.100080    2144 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0409 00:49:09.100080    2144 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0409 00:49:09.100295    2144 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0409 00:49:09.219744    2144 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0409 00:49:09.219813    2144 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0409 00:49:09.447567    2144 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0409 00:49:09.447567    2144 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0409 00:49:09.756149    2144 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0409 00:49:09.756149    2144 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0409 00:49:09.985452    2144 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0409 00:49:09.985538    2144 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0409 00:49:10.358148    2144 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0409 00:49:10.358148    2144 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0409 00:49:10.358698    2144 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-611500] and IPs [192.168.113.157 127.0.0.1 ::1]
	I0409 00:49:10.358698    2144 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-611500] and IPs [192.168.113.157 127.0.0.1 ::1]
	I0409 00:49:10.494976    2144 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0409 00:49:10.495061    2144 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0409 00:49:10.495671    2144 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-611500] and IPs [192.168.113.157 127.0.0.1 ::1]
	I0409 00:49:10.495700    2144 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-611500] and IPs [192.168.113.157 127.0.0.1 ::1]
	I0409 00:49:10.770202    2144 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0409 00:49:10.770287    2144 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0409 00:49:10.890908    2144 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0409 00:49:10.891011    2144 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0409 00:49:11.099837    2144 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0409 00:49:11.099933    2144 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0409 00:49:11.100074    2144 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0409 00:49:11.100074    2144 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0409 00:49:11.338096    2144 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0409 00:49:11.338096    2144 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0409 00:49:11.526747    2144 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0409 00:49:11.527814    2144 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0409 00:49:12.103564    2144 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0409 00:49:12.103564    2144 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0409 00:49:12.215742    2144 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0409 00:49:12.215742    2144 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0409 00:49:12.556185    2144 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0409 00:49:12.556185    2144 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0409 00:49:12.557183    2144 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0409 00:49:12.557183    2144 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0409 00:49:12.564518    2144 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0409 00:49:12.564518    2144 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0409 00:49:12.569827    2144 out.go:235]   - Booting up control plane ...
	I0409 00:49:12.569827    2144 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0409 00:49:12.569827    2144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0409 00:49:12.570738    2144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0409 00:49:12.570738    2144 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0409 00:49:12.570738    2144 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0409 00:49:12.570738    2144 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0409 00:49:12.592639    2144 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0409 00:49:12.592639    2144 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0409 00:49:12.601623    2144 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0409 00:49:12.601623    2144 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0409 00:49:12.601755    2144 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0409 00:49:12.601755    2144 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0409 00:49:12.841077    2144 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0409 00:49:12.841077    2144 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0409 00:49:12.841253    2144 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0409 00:49:12.841253    2144 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0409 00:49:13.348757    2144 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 507.468691ms
	I0409 00:49:13.348836    2144 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 507.468691ms
	I0409 00:49:13.348926    2144 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0409 00:49:13.349090    2144 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0409 00:49:19.851091    2144 kubeadm.go:310] [api-check] The API server is healthy after 6.502281756s
	I0409 00:49:19.851132    2144 command_runner.go:130] > [api-check] The API server is healthy after 6.502281756s
	I0409 00:49:19.868501    2144 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0409 00:49:19.868501    2144 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0409 00:49:19.897745    2144 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0409 00:49:19.897745    2144 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0409 00:49:19.956193    2144 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0409 00:49:19.956244    2144 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0409 00:49:19.956599    2144 command_runner.go:130] > [mark-control-plane] Marking the node multinode-611500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0409 00:49:19.956599    2144 kubeadm.go:310] [mark-control-plane] Marking the node multinode-611500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0409 00:49:19.978259    2144 kubeadm.go:310] [bootstrap-token] Using token: 30nkwg.vpaii8w5ok35o0cg
	I0409 00:49:19.978965    2144 command_runner.go:130] > [bootstrap-token] Using token: 30nkwg.vpaii8w5ok35o0cg
	I0409 00:49:19.981918    2144 out.go:235]   - Configuring RBAC rules ...
	I0409 00:49:19.982329    2144 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0409 00:49:19.982376    2144 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0409 00:49:19.996182    2144 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0409 00:49:19.996923    2144 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0409 00:49:20.010677    2144 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0409 00:49:20.010677    2144 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0409 00:49:20.026766    2144 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0409 00:49:20.026766    2144 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0409 00:49:20.034565    2144 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0409 00:49:20.034565    2144 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0409 00:49:20.041286    2144 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0409 00:49:20.041335    2144 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0409 00:49:20.262752    2144 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0409 00:49:20.262752    2144 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0409 00:49:20.779634    2144 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0409 00:49:20.779634    2144 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0409 00:49:21.272314    2144 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0409 00:49:21.272371    2144 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0409 00:49:21.273656    2144 kubeadm.go:310] 
	I0409 00:49:21.273731    2144 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0409 00:49:21.273928    2144 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0409 00:49:21.273991    2144 kubeadm.go:310] 
	I0409 00:49:21.274180    2144 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0409 00:49:21.274180    2144 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0409 00:49:21.274180    2144 kubeadm.go:310] 
	I0409 00:49:21.274180    2144 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0409 00:49:21.274180    2144 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0409 00:49:21.274180    2144 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0409 00:49:21.274180    2144 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0409 00:49:21.274180    2144 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0409 00:49:21.274180    2144 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0409 00:49:21.274180    2144 kubeadm.go:310] 
	I0409 00:49:21.274180    2144 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0409 00:49:21.274180    2144 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0409 00:49:21.274180    2144 kubeadm.go:310] 
	I0409 00:49:21.274864    2144 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0409 00:49:21.274864    2144 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0409 00:49:21.274864    2144 kubeadm.go:310] 
	I0409 00:49:21.275069    2144 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0409 00:49:21.275069    2144 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0409 00:49:21.275069    2144 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0409 00:49:21.275069    2144 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0409 00:49:21.275069    2144 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0409 00:49:21.275069    2144 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0409 00:49:21.275069    2144 kubeadm.go:310] 
	I0409 00:49:21.275069    2144 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0409 00:49:21.275624    2144 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0409 00:49:21.275803    2144 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0409 00:49:21.275803    2144 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0409 00:49:21.275803    2144 kubeadm.go:310] 
	I0409 00:49:21.275803    2144 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 30nkwg.vpaii8w5ok35o0cg \
	I0409 00:49:21.275803    2144 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 30nkwg.vpaii8w5ok35o0cg \
	I0409 00:49:21.276324    2144 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334 \
	I0409 00:49:21.276435    2144 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334 \
	I0409 00:49:21.276435    2144 command_runner.go:130] > 	--control-plane 
	I0409 00:49:21.276435    2144 kubeadm.go:310] 	--control-plane 
	I0409 00:49:21.276435    2144 kubeadm.go:310] 
	I0409 00:49:21.276435    2144 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0409 00:49:21.276435    2144 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0409 00:49:21.276435    2144 kubeadm.go:310] 
	I0409 00:49:21.276435    2144 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 30nkwg.vpaii8w5ok35o0cg \
	I0409 00:49:21.276435    2144 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 30nkwg.vpaii8w5ok35o0cg \
	I0409 00:49:21.277192    2144 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334 
	I0409 00:49:21.277192    2144 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334 
	I0409 00:49:21.278521    2144 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0409 00:49:21.278834    2144 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
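
The --discovery-token-ca-cert-hash in the join commands above is kubeadm's CA public-key pin: the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. It can be recomputed offline to verify a join command, e.g.:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	// Recompute kubeadm's discovery-token-ca-cert-hash from ca.crt:
	// sha256 over the certificate's DER-encoded SubjectPublicKeyInfo.
	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}
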
	I0409 00:49:21.278961    2144 cni.go:84] Creating CNI manager for ""
	I0409 00:49:21.278961    2144 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0409 00:49:21.285235    2144 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0409 00:49:21.300059    2144 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0409 00:49:21.310290    2144 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0409 00:49:21.310290    2144 command_runner.go:130] >   Size: 3103192   	Blocks: 6064       IO Block: 4096   regular file
	I0409 00:49:21.310366    2144 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0409 00:49:21.310366    2144 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0409 00:49:21.310366    2144 command_runner.go:130] > Access: 2025-04-09 00:47:27.563694100 +0000
	I0409 00:49:21.310366    2144 command_runner.go:130] > Modify: 2025-01-14 09:03:58.000000000 +0000
	I0409 00:49:21.310424    2144 command_runner.go:130] > Change: 2025-04-09 00:47:18.812000000 +0000
	I0409 00:49:21.310424    2144 command_runner.go:130] >  Birth: -
	I0409 00:49:21.310450    2144 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0409 00:49:21.310450    2144 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0409 00:49:21.356206    2144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0409 00:49:22.044003    2144 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0409 00:49:22.044078    2144 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0409 00:49:22.044078    2144 command_runner.go:130] > serviceaccount/kindnet created
	I0409 00:49:22.044078    2144 command_runner.go:130] > daemonset.apps/kindnet created
	I0409 00:49:22.044078    2144 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0409 00:49:22.055419    2144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0409 00:49:22.060523    2144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-611500 minikube.k8s.io/updated_at=2025_04_09T00_49_22_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83 minikube.k8s.io/name=multinode-611500 minikube.k8s.io/primary=true
	I0409 00:49:22.074487    2144 command_runner.go:130] > -16
	I0409 00:49:22.074580    2144 ops.go:34] apiserver oom_adj: -16
	I0409 00:49:22.272318    2144 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0409 00:49:22.272478    2144 command_runner.go:130] > node/multinode-611500 labeled
	I0409 00:49:22.283590    2144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0409 00:49:22.394697    2144 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0409 00:49:22.783939    2144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0409 00:49:22.903639    2144 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0409 00:49:23.284751    2144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0409 00:49:23.396317    2144 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0409 00:49:23.785323    2144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0409 00:49:23.897998    2144 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0409 00:49:24.287179    2144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0409 00:49:24.388387    2144 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0409 00:49:24.784954    2144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0409 00:49:24.893013    2144 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0409 00:49:25.285121    2144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0409 00:49:25.402725    2144 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0409 00:49:25.785677    2144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0409 00:49:25.932013    2144 command_runner.go:130] > NAME      SECRETS   AGE
	I0409 00:49:25.932046    2144 command_runner.go:130] > default   0         0s
	I0409 00:49:25.932046    2144 kubeadm.go:1113] duration metric: took 3.8876726s to wait for elevateKubeSystemPrivileges
	I0409 00:49:25.932174    2144 kubeadm.go:394] duration metric: took 17.6482992s to StartCluster
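
The burst of "get sa default" retries above is minikube waiting for the ServiceAccount controller to create the default account before granting kube-system privileges; each NotFound is expected until the controller catches up (about 3.9s here). The same wait, sketched as a plain polling loop (the interval and timeout are assumptions, not minikube's exact values):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls "kubectl get sa default" until the default
	// ServiceAccount exists, mirroring the retry loop visible in the log.
	func waitForDefaultSA(kubectl string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command(kubectl, "get", "sa", "default").Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not created within %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA("kubectl", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
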
	I0409 00:49:25.932290    2144 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:49:25.932523    2144 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0409 00:49:25.934677    2144 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:49:25.935555    2144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0409 00:49:25.936088    2144 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.113.157 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0409 00:49:25.936088    2144 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0409 00:49:25.936440    2144 addons.go:69] Setting storage-provisioner=true in profile "multinode-611500"
	I0409 00:49:25.936440    2144 addons.go:69] Setting default-storageclass=true in profile "multinode-611500"
	I0409 00:49:25.936553    2144 addons.go:238] Setting addon storage-provisioner=true in "multinode-611500"
	I0409 00:49:25.936856    2144 config.go:182] Loaded profile config "multinode-611500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 00:49:25.937011    2144 host.go:66] Checking if "multinode-611500" exists ...
	I0409 00:49:25.936921    2144 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-611500"
	I0409 00:49:25.938646    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:49:25.939617    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:49:25.940491    2144 out.go:177] * Verifying Kubernetes components...
	I0409 00:49:25.958102    2144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
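	The [executing ==>] lines show how the Hyper-V driver shells out to PowerShell for every VM state probe. A minimal Go sketch of the same call via os/exec (the VM name is taken from this log; any Hyper-V VM name works):

```go
// Query a Hyper-V VM's state by invoking powershell.exe, as libmachine does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func vmState(name string) (string, error) {
	cmd := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive",
		fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name),
	)
	out, err := cmd.Output() // stdout only; stderr surfaces via *exec.ExitError
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := vmState("multinode-611500")
	if err != nil {
		panic(err)
	}
	fmt.Println(state) // e.g. "Running", matching the [stdout =====>] lines
}
```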
	I0409 00:49:26.235121    2144 command_runner.go:130] > apiVersion: v1
	I0409 00:49:26.235521    2144 command_runner.go:130] > data:
	I0409 00:49:26.235607    2144 command_runner.go:130] >   Corefile: |
	I0409 00:49:26.235607    2144 command_runner.go:130] >     .:53 {
	I0409 00:49:26.235607    2144 command_runner.go:130] >         errors
	I0409 00:49:26.235607    2144 command_runner.go:130] >         health {
	I0409 00:49:26.235662    2144 command_runner.go:130] >            lameduck 5s
	I0409 00:49:26.235662    2144 command_runner.go:130] >         }
	I0409 00:49:26.235662    2144 command_runner.go:130] >         ready
	I0409 00:49:26.235662    2144 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0409 00:49:26.235662    2144 command_runner.go:130] >            pods insecure
	I0409 00:49:26.235662    2144 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0409 00:49:26.235662    2144 command_runner.go:130] >            ttl 30
	I0409 00:49:26.235662    2144 command_runner.go:130] >         }
	I0409 00:49:26.235662    2144 command_runner.go:130] >         prometheus :9153
	I0409 00:49:26.235662    2144 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0409 00:49:26.235662    2144 command_runner.go:130] >            max_concurrent 1000
	I0409 00:49:26.235662    2144 command_runner.go:130] >         }
	I0409 00:49:26.235662    2144 command_runner.go:130] >         cache 30 {
	I0409 00:49:26.235662    2144 command_runner.go:130] >            disable success cluster.local
	I0409 00:49:26.235662    2144 command_runner.go:130] >            disable denial cluster.local
	I0409 00:49:26.235662    2144 command_runner.go:130] >         }
	I0409 00:49:26.235662    2144 command_runner.go:130] >         loop
	I0409 00:49:26.235662    2144 command_runner.go:130] >         reload
	I0409 00:49:26.235662    2144 command_runner.go:130] >         loadbalance
	I0409 00:49:26.235662    2144 command_runner.go:130] >     }
	I0409 00:49:26.235662    2144 command_runner.go:130] > kind: ConfigMap
	I0409 00:49:26.235662    2144 command_runner.go:130] > metadata:
	I0409 00:49:26.235662    2144 command_runner.go:130] >   creationTimestamp: "2025-04-09T00:49:20Z"
	I0409 00:49:26.235662    2144 command_runner.go:130] >   name: coredns
	I0409 00:49:26.235662    2144 command_runner.go:130] >   namespace: kube-system
	I0409 00:49:26.235662    2144 command_runner.go:130] >   resourceVersion: "251"
	I0409 00:49:26.235662    2144 command_runner.go:130] >   uid: 1841b7c6-d285-4b2d-ade8-7c8739a5db14
	I0409 00:49:26.237345    2144 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.112.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0409 00:49:26.341696    2144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0409 00:49:26.897062    2144 command_runner.go:130] > configmap/coredns replaced
	I0409 00:49:26.897177    2144 start.go:971] {"host.minikube.internal": 192.168.112.1} host record injected into CoreDNS's ConfigMap
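	The sed pipeline above splices a hosts{} stanza into the Corefile ahead of the forward directive, so host.minikube.internal resolves to the host gateway IP inside the cluster. A Go sketch of the same edit on a trimmed Corefile (illustration only, not minikube's code):

```go
// Insert a CoreDNS hosts{} stanza directly above the forward directive,
// matching its indentation, the way the logged sed expression does.
package main

import (
	"fmt"
	"strings"
)

const corefile = `.:53 {
    errors
    kubernetes cluster.local in-addr.arpa ip6.arpa
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
    cache 30
}
`

func injectHostRecord(cf, ip string) string {
	var b strings.Builder
	for _, line := range strings.SplitAfter(cf, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			indent := line[:len(line)-len(strings.TrimLeft(line, " \t"))]
			b.WriteString(indent + "hosts {\n")
			b.WriteString(indent + "   " + ip + " host.minikube.internal\n")
			b.WriteString(indent + "   fallthrough\n")
			b.WriteString(indent + "}\n")
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	fmt.Print(injectHostRecord(corefile, "192.168.112.1"))
}
```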
	I0409 00:49:26.898802    2144 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0409 00:49:26.898802    2144 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0409 00:49:26.899204    2144 kapi.go:59] client config for multinode-611500: &rest.Config{Host:"https://192.168.113.157:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-611500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-611500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2809400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0409 00:49:26.899204    2144 kapi.go:59] client config for multinode-611500: &rest.Config{Host:"https://192.168.113.157:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-611500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-611500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2809400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0409 00:49:26.901534    2144 cert_rotation.go:140] Starting client certificate rotation controller
	I0409 00:49:26.901534    2144 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0409 00:49:26.901534    2144 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0409 00:49:26.901534    2144 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0409 00:49:26.901534    2144 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0409 00:49:26.902504    2144 node_ready.go:35] waiting up to 6m0s for node "multinode-611500" to be "Ready" ...
	I0409 00:49:26.902504    2144 deployment.go:95] "Request Body" body=""
	I0409 00:49:26.902504    2144 type.go:168] "Request Body" body=""
	I0409 00:49:26.902504    2144 round_trippers.go:470] GET https://192.168.113.157:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0409 00:49:26.902504    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:26.902504    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:26.902504    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:26.902504    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:26.902504    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:26.902504    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:26.902504    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
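	The Accept and Content-Type request headers logged here come from client-go's content negotiation: the client asks for protobuf with a JSON fallback, which is why the response bodies below are hex dumps of application/vnd.kubernetes.protobuf rather than JSON. A minimal sketch of opting a rest.Config into that encoding (kubeconfig path taken from this log):

```go
// Configure client-go to prefer protobuf on the wire, producing exactly the
// Accept/Content-Type headers seen in the round_trippers lines.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Request protobuf first, fall back to JSON, and send protobuf bodies.
	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	cfg.ContentType = "application/vnd.kubernetes.protobuf"

	_ = kubernetes.NewForConfigOrDie(cfg) // subsequent requests carry the headers above
}
```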
	I0409 00:49:26.918128    2144 round_trippers.go:581] Response Status: 200 OK in 15 milliseconds
	I0409 00:49:26.918128    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:26.918128    2144 round_trippers.go:587]     Audit-Id: 0fe60289-0024-447d-985c-33cdadb3bb82
	I0409 00:49:26.918128    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:26.918128    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:26.918128    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:26.918128    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:26.918128    2144 round_trippers.go:587]     Content-Length: 144
	I0409 00:49:26.918128    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:26 GMT
	I0409 00:49:26.918128    2144 deployment.go:95] "Response Body" body=<
		00000000  6b 38 73 00 0a 17 0a 0e  61 75 74 6f 73 63 61 6c  |k8s.....autoscal|
		00000010  69 6e 67 2f 76 31 12 05  53 63 61 6c 65 12 6d 0a  |ing/v1..Scale.m.|
		00000020  51 0a 07 63 6f 72 65 64  6e 73 12 00 1a 0b 6b 75  |Q..coredns....ku|
		00000030  62 65 2d 73 79 73 74 65  6d 22 00 2a 24 33 32 66  |be-system".*$32f|
		00000040  37 30 66 61 63 2d 33 30  35 31 2d 34 38 36 61 2d  |70fac-3051-486a-|
		00000050  39 65 33 64 2d 62 61 30  39 64 39 64 33 33 65 30  |9e3d-ba09d9d33e0|
		00000060  66 32 03 33 36 34 38 00  42 08 08 90 88 d7 bf 06  |f2.3648.B.......|
		00000070  10 00 12 02 08 02 1a 14  08 02 12 10 6b 38 73 2d  |............k8s-|
		00000080  61 70 70 3d 6b 75 62 65  2d 64 6e 73 1a 00 22 00  |app=kube-dns..".|
	 >
	I0409 00:49:26.918128    2144 deployment.go:111] "Request Body" body=<
		00000000  6b 38 73 00 0a 17 0a 0e  61 75 74 6f 73 63 61 6c  |k8s.....autoscal|
		00000010  69 6e 67 2f 76 31 12 05  53 63 61 6c 65 12 6d 0a  |ing/v1..Scale.m.|
		00000020  51 0a 07 63 6f 72 65 64  6e 73 12 00 1a 0b 6b 75  |Q..coredns....ku|
		00000030  62 65 2d 73 79 73 74 65  6d 22 00 2a 24 33 32 66  |be-system".*$32f|
		00000040  37 30 66 61 63 2d 33 30  35 31 2d 34 38 36 61 2d  |70fac-3051-486a-|
		00000050  39 65 33 64 2d 62 61 30  39 64 39 64 33 33 65 30  |9e3d-ba09d9d33e0|
		00000060  66 32 03 33 36 34 38 00  42 08 08 90 88 d7 bf 06  |f2.3648.B.......|
		00000070  10 00 12 02 08 01 1a 14  08 02 12 10 6b 38 73 2d  |............k8s-|
		00000080  61 70 70 3d 6b 75 62 65  2d 64 6e 73 1a 00 22 00  |app=kube-dns..".|
	 >
	I0409 00:49:26.918128    2144 round_trippers.go:470] PUT https://192.168.113.157:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0409 00:49:26.918128    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:26.918128    2144 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:26.918128    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:26.918128    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:26.919128    2144 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0409 00:49:26.919128    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:26.919128    2144 round_trippers.go:587]     Audit-Id: e9b37501-be57-453b-99d0-76879e6d8cf1
	I0409 00:49:26.919128    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:26.919128    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:26.919128    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:26.919128    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:26.919128    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:26 GMT
	I0409 00:49:26.920148    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bd 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 33 31  39 38 00 42 08 08 8d 88  |34242.3198.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20926 chars]
	 >
	I0409 00:49:26.940170    2144 round_trippers.go:581] Response Status: 200 OK in 22 milliseconds
	I0409 00:49:26.940170    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:26.940170    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:26.940170    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:26.940170    2144 round_trippers.go:587]     Content-Length: 144
	I0409 00:49:26.940170    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:26 GMT
	I0409 00:49:26.940170    2144 round_trippers.go:587]     Audit-Id: b7284ad3-6b96-45d0-a32b-2a31c6f6db67
	I0409 00:49:26.940170    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:26.940170    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:26.940170    2144 deployment.go:111] "Response Body" body=<
		00000000  6b 38 73 00 0a 17 0a 0e  61 75 74 6f 73 63 61 6c  |k8s.....autoscal|
		00000010  69 6e 67 2f 76 31 12 05  53 63 61 6c 65 12 6d 0a  |ing/v1..Scale.m.|
		00000020  51 0a 07 63 6f 72 65 64  6e 73 12 00 1a 0b 6b 75  |Q..coredns....ku|
		00000030  62 65 2d 73 79 73 74 65  6d 22 00 2a 24 33 32 66  |be-system".*$32f|
		00000040  37 30 66 61 63 2d 33 30  35 31 2d 34 38 36 61 2d  |70fac-3051-486a-|
		00000050  39 65 33 64 2d 62 61 30  39 64 39 64 33 33 65 30  |9e3d-ba09d9d33e0|
		00000060  66 32 03 33 37 31 38 00  42 08 08 90 88 d7 bf 06  |f2.3718.B.......|
		00000070  10 00 12 02 08 01 1a 14  08 02 12 10 6b 38 73 2d  |............k8s-|
		00000080  61 70 70 3d 6b 75 62 65  2d 64 6e 73 1a 00 22 00  |app=kube-dns..".|
	 >
	I0409 00:49:27.403138    2144 deployment.go:95] "Request Body" body=""
	I0409 00:49:27.403138    2144 type.go:168] "Request Body" body=""
	I0409 00:49:27.403138    2144 round_trippers.go:470] GET https://192.168.113.157:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0409 00:49:27.403138    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:27.403138    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:27.403138    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:27.403138    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:27.403138    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:27.403138    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:27.403138    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:27.407433    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:27.407433    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:27.407433    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:27.407433    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:27.407433    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:27 GMT
	I0409 00:49:27.407540    2144 round_trippers.go:587]     Audit-Id: 247d982e-4f21-4102-99cb-307779e1c382
	I0409 00:49:27.407540    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:27.407540    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:27.407604    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bd 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 33 31  39 38 00 42 08 08 8d 88  |34242.3198.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20926 chars]
	 >
	I0409 00:49:27.408206    2144 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 00:49:27.408206    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:27.408206    2144 round_trippers.go:587]     Audit-Id: f3a8280f-a053-4773-b832-e20343a924f6
	I0409 00:49:27.408206    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:27.408206    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:27.408206    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:27.408206    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:27.408206    2144 round_trippers.go:587]     Content-Length: 144
	I0409 00:49:27.408206    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:27 GMT
	I0409 00:49:27.408206    2144 deployment.go:95] "Response Body" body=<
		00000000  6b 38 73 00 0a 17 0a 0e  61 75 74 6f 73 63 61 6c  |k8s.....autoscal|
		00000010  69 6e 67 2f 76 31 12 05  53 63 61 6c 65 12 6d 0a  |ing/v1..Scale.m.|
		00000020  51 0a 07 63 6f 72 65 64  6e 73 12 00 1a 0b 6b 75  |Q..coredns....ku|
		00000030  62 65 2d 73 79 73 74 65  6d 22 00 2a 24 33 32 66  |be-system".*$32f|
		00000040  37 30 66 61 63 2d 33 30  35 31 2d 34 38 36 61 2d  |70fac-3051-486a-|
		00000050  39 65 33 64 2d 62 61 30  39 64 39 64 33 33 65 30  |9e3d-ba09d9d33e0|
		00000060  66 32 03 33 38 31 38 00  42 08 08 90 88 d7 bf 06  |f2.3818.B.......|
		00000070  10 00 12 02 08 01 1a 14  08 01 12 10 6b 38 73 2d  |............k8s-|
		00000080  61 70 70 3d 6b 75 62 65  2d 64 6e 73 1a 00 22 00  |app=kube-dns..".|
	 >
	I0409 00:49:27.408206    2144 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-611500" context rescaled to 1 replicas
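	The GET/PUT pair on .../deployments/coredns/scale above is the standard Scale-subresource round trip: read the autoscaling/v1 Scale object (visible in the protobuf dumps), set spec.replicas, and PUT it back, with the carried resourceVersion providing optimistic concurrency. A client-go sketch of the same rescale, assuming the kubeconfig path from this log:

```go
// Rescale the coredns Deployment to 1 replica via its Scale subresource.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// GET .../deployments/coredns/scale
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// PUT it back with spec.replicas = 1; the Scale's resourceVersion guards
	// against concurrent writers, as in the logged PUT above.
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println(`"coredns" rescaled to 1 replica`)
}
```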
	I0409 00:49:27.902906    2144 type.go:168] "Request Body" body=""
	I0409 00:49:27.902906    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:27.902906    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:27.902906    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:27.902906    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:27.906905    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:27.906905    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:27.906905    2144 round_trippers.go:587]     Audit-Id: 6edfd604-e122-400d-a3e0-444ea2fee8c4
	I0409 00:49:27.906905    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:27.906905    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:27.906905    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:27.906905    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:27.906905    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:27 GMT
	I0409 00:49:27.906905    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bd 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 33 31  39 38 00 42 08 08 8d 88  |34242.3198.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20926 chars]
	 >
	I0409 00:49:28.287805    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:49:28.288800    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:49:28.289813    2144 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0409 00:49:28.289813    2144 kapi.go:59] client config for multinode-611500: &rest.Config{Host:"https://192.168.113.157:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-611500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-611500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2809400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0409 00:49:28.290815    2144 addons.go:238] Setting addon default-storageclass=true in "multinode-611500"
	I0409 00:49:28.290815    2144 host.go:66] Checking if "multinode-611500" exists ...
	I0409 00:49:28.291853    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:49:28.296877    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:49:28.296877    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:49:28.307907    2144 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0409 00:49:28.310544    2144 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0409 00:49:28.310544    2144 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0409 00:49:28.310544    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:49:28.403551    2144 type.go:168] "Request Body" body=""
	I0409 00:49:28.403551    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:28.403551    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:28.403551    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:28.403551    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:28.408563    2144 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 00:49:28.408563    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:28.408563    2144 round_trippers.go:587]     Audit-Id: 1fb5f58e-f464-48d4-abe2-061142378a3b
	I0409 00:49:28.408563    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:28.408563    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:28.408563    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:28.408563    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:28.408563    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:28 GMT
	I0409 00:49:28.409565    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bd 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 33 31  39 38 00 42 08 08 8d 88  |34242.3198.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20926 chars]
	 >
	I0409 00:49:28.903164    2144 type.go:168] "Request Body" body=""
	I0409 00:49:28.903164    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:28.903164    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:28.903164    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:28.903164    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:28.910253    2144 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0409 00:49:28.910380    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:28.910380    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:28.910380    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:28.910380    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:28.910380    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:28 GMT
	I0409 00:49:28.910380    2144 round_trippers.go:587]     Audit-Id: 3e497d73-5865-48b2-b2ea-bf7962ee5ab0
	I0409 00:49:28.910380    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:28.925149    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bd 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 33 31  39 38 00 42 08 08 8d 88  |34242.3198.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20926 chars]
	 >
	I0409 00:49:28.925447    2144 node_ready.go:53] node "multinode-611500" has status "Ready":"False"
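	Each GET of /api/v1/nodes/multinode-611500 above is one iteration of the node_ready wait, repeated until the NodeReady condition flips to True or the 6m0s budget runs out. A minimal client-go sketch of that loop (the 500ms poll interval is an assumption; the node name and timeout come from this log):

```go
// Wait until a node reports the NodeReady condition as True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-611500", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "keep polling"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println(`node "multinode-611500" is Ready`)
}
```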
	I0409 00:49:29.468916    2144 type.go:168] "Request Body" body=""
	I0409 00:49:29.468916    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:29.468916    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:29.468916    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:29.468916    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:29.472976    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:29.473180    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:29.473180    2144 round_trippers.go:587]     Audit-Id: 1993ec67-b545-4d73-855a-78563dedeb08
	I0409 00:49:29.473180    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:29.473180    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:29.473180    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:29.473180    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:29.473180    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:29 GMT
	I0409 00:49:29.473723    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bd 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 33 31  39 38 00 42 08 08 8d 88  |34242.3198.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20926 chars]
	 >
	I0409 00:49:29.903088    2144 type.go:168] "Request Body" body=""
	I0409 00:49:29.903088    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:29.903088    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:29.903088    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:29.903088    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:29.907104    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:29.907104    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:29.907104    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:29.907104    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:29.907104    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:29 GMT
	I0409 00:49:29.907104    2144 round_trippers.go:587]     Audit-Id: 8f639c59-7807-46df-a30c-9869af23e085
	I0409 00:49:29.907104    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:29.907104    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:29.907104    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bd 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 33 31  39 38 00 42 08 08 8d 88  |34242.3198.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20926 chars]
	 >
	I0409 00:49:30.403600    2144 type.go:168] "Request Body" body=""
	I0409 00:49:30.403600    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:30.403600    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:30.403600    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:30.403600    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:30.407858    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:30.407930    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:30.407930    2144 round_trippers.go:587]     Audit-Id: 294122e7-f9bc-4530-9527-5f02384cc958
	I0409 00:49:30.407930    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:30.407930    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:30.407930    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:30.408006    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:30.408006    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:30 GMT
	I0409 00:49:30.408396    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bd 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 33 31  39 38 00 42 08 08 8d 88  |34242.3198.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20926 chars]
	 >
	I0409 00:49:30.663808    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:49:30.664438    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:49:30.664524    2144 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0409 00:49:30.664524    2144 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0409 00:49:30.664605    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:49:30.903533    2144 type.go:168] "Request Body" body=""
	I0409 00:49:30.903533    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:30.903533    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:30.903533    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:30.903533    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:30.907737    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:30.907737    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:30.907737    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:30.907737    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:30.907737    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:30.907737    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:30 GMT
	I0409 00:49:30.907737    2144 round_trippers.go:587]     Audit-Id: 45d378eb-071e-427b-b31d-5d86dd12ece0
	I0409 00:49:30.907737    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:30.908070    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bd 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 33 31  39 38 00 42 08 08 8d 88  |34242.3198.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20926 chars]
	 >
	I0409 00:49:30.929679    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:49:30.930765    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:49:30.930805    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:49:31.403019    2144 type.go:168] "Request Body" body=""
	I0409 00:49:31.403019    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:31.403019    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:31.403019    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:31.403019    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:31.409581    2144 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 00:49:31.409581    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:31.409581    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:31.409792    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:31.409792    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:31.409792    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:31 GMT
	I0409 00:49:31.409792    2144 round_trippers.go:587]     Audit-Id: 13f119a7-3511-4c92-8b24-618e3108b9cc
	I0409 00:49:31.409792    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:31.410367    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bd 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 33 31  39 38 00 42 08 08 8d 88  |34242.3198.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20926 chars]
	 >
	I0409 00:49:31.410576    2144 node_ready.go:53] node "multinode-611500" has status "Ready":"False"
	I0409 00:49:31.902684    2144 type.go:168] "Request Body" body=""
	I0409 00:49:31.902684    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:31.902684    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:31.902684    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:31.902684    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:31.906286    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:31.906366    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:31.906366    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:31.906366    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:31 GMT
	I0409 00:49:31.906366    2144 round_trippers.go:587]     Audit-Id: 073b4a21-c421-426c-990f-98058357a491
	I0409 00:49:31.906366    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:31.906456    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:31.906456    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:31.906675    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bd 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 33 31  39 38 00 42 08 08 8d 88  |34242.3198.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20926 chars]
	 >
	I0409 00:49:32.403363    2144 type.go:168] "Request Body" body=""
	I0409 00:49:32.403363    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:32.403363    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:32.403363    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:32.403363    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:32.407855    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:32.407855    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:32.407855    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:32 GMT
	I0409 00:49:32.407977    2144 round_trippers.go:587]     Audit-Id: 44e07165-fa5d-443f-82a8-b77c45915c5f
	I0409 00:49:32.407977    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:32.407977    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:32.407977    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:32.407977    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:32.408251    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bd 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 33 31  39 38 00 42 08 08 8d 88  |34242.3198.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20926 chars]
	 >
	I0409 00:49:32.903675    2144 type.go:168] "Request Body" body=""
	I0409 00:49:32.903675    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:32.903675    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:32.903675    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:32.903675    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:32.908013    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:32.908089    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:32.908089    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:32.908203    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:32.908263    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:32.908263    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:32.908263    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:32 GMT
	I0409 00:49:32.908263    2144 round_trippers.go:587]     Audit-Id: c8ddf602-d6f3-43b9-a200-a80fa2a58ddd
	I0409 00:49:32.908588    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bd 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 33 31  39 38 00 42 08 08 8d 88  |34242.3198.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20926 chars]
	 >
	I0409 00:49:33.023278    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:49:33.023862    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:49:33.023862    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:49:33.403242    2144 type.go:168] "Request Body" body=""
	I0409 00:49:33.403300    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:33.403300    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:33.403300    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:33.403300    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:33.407063    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:33.407063    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:33.407063    2144 round_trippers.go:587]     Audit-Id: b7e370cc-ceef-48eb-a561-219a3a32a90a
	I0409 00:49:33.407063    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:33.407063    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:33.407063    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:33.407063    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:33.407063    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:33 GMT
	I0409 00:49:33.407958    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bd 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 33 31  39 38 00 42 08 08 8d 88  |34242.3198.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20926 chars]
	 >
	I0409 00:49:33.676488    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 00:49:33.676488    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:49:33.676692    2144 sshutil.go:53] new ssh client: &{IP:192.168.113.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\id_rsa Username:docker}
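	Here sshutil dials the VM with the machine's id_rsa key, after which ssh_runner drives kubectl remotely. A minimal sketch with golang.org/x/crypto/ssh, reusing the IP, user, key path, and command from this log (host-key verification is skipped, a usual shortcut for a throwaway test VM):

```go
// Open a key-authenticated SSH session to the minikube VM and run one command.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.113.157:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a disposable test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
	fmt.Println(string(out), err)
}
```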
	I0409 00:49:33.832568    2144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0409 00:49:33.903285    2144 type.go:168] "Request Body" body=""
	I0409 00:49:33.903285    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:33.903285    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:33.903285    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:33.903285    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:34.102610    2144 round_trippers.go:581] Response Status: 200 OK in 199 milliseconds
	I0409 00:49:34.102610    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:34.102610    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:34 GMT
	I0409 00:49:34.102610    2144 round_trippers.go:587]     Audit-Id: c75ce85e-8196-4ccb-87db-16867af717a2
	I0409 00:49:34.102610    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:34.102610    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:34.102610    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:34.102610    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:34.102610    2144 type.go:168] "Response Body" body=<
		[duplicate hexdump elided; byte-identical to the Node "Response Body" dump shown earlier, truncated 20926 chars]
	 >
	I0409 00:49:34.103496    2144 node_ready.go:53] node "multinode-611500" has status "Ready":"False"
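Each ~500 ms block above is one iteration of the node readiness poll: re-fetch the Node, inspect its Ready condition, and loop while it reports False (the kubelet flips it to True once the runtime and CNI are healthy). A sketch of an equivalent loop with client-go's wait helpers; the interval, timeout, and function name are illustrative, and older client-go spells the helper wait.PollImmediate:

    // Sketch: poll a Node's Ready condition, mirroring the node_ready checks above.
    package addons // hypothetical package name

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as transient and keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil // no Ready condition posted yet
            })
    }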
	I0409 00:49:34.381795    2144 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0409 00:49:34.381856    2144 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0409 00:49:34.381945    2144 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0409 00:49:34.382003    2144 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0409 00:49:34.382003    2144 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0409 00:49:34.382003    2144 command_runner.go:130] > pod/storage-provisioner created
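The manifest is applied by the kubectl binary inside the guest, reached over the SSH client opened at sshutil.go:53 above. Below is a self-contained sketch of that run-over-SSH pattern with golang.org/x/crypto/ssh; the address, key path, and command echo the log, but this is not minikube's ssh_runner itself:

    // Sketch: execute kubectl inside the VM over SSH, as ssh_runner.go does above.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
        }
        client, err := ssh.Dial("tcp", "192.168.113.157:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
            "/var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out)) // e.g. "pod/storage-provisioner created"
    }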
	I0409 00:49:34.403260    2144 type.go:168] "Request Body" body=""
	I0409 00:49:34.403260    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:34.403260    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:34.403260    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:34.403260    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:34.405899    2144 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 00:49:34.405899    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:34.405899    2144 round_trippers.go:587]     Audit-Id: 4f5f3ceb-8ca9-4844-8978-20a79a28ad73
	I0409 00:49:34.405899    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:34.405899    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:34.405899    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:34.405899    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:34.405899    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:34 GMT
	I0409 00:49:34.406886    2144 type.go:168] "Response Body" body=<
		[duplicate hexdump elided; byte-identical to the Node "Response Body" dump shown earlier, truncated 20926 chars]
	 >
	I0409 00:49:34.903585    2144 type.go:168] "Request Body" body=""
	I0409 00:49:34.903585    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:34.903585    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:34.903585    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:34.903585    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:34.908791    2144 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 00:49:34.908921    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:34.908921    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:34.908921    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:34.908921    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:34.908921    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:34 GMT
	I0409 00:49:34.908921    2144 round_trippers.go:587]     Audit-Id: 305c88c0-7e4c-4d93-8867-92c245388188
	I0409 00:49:34.908921    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:34.909299    2144 type.go:168] "Response Body" body=<
		[duplicate hexdump elided; byte-identical to the Node "Response Body" dump shown earlier, truncated 20926 chars]
	 >
	I0409 00:49:35.403059    2144 type.go:168] "Request Body" body=""
	I0409 00:49:35.403059    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:35.403059    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:35.403059    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:35.403059    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:35.407318    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:35.407867    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:35.407867    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:35.407867    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:35.407867    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:35.407867    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:35.407867    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:35 GMT
	I0409 00:49:35.407867    2144 round_trippers.go:587]     Audit-Id: ed443fcf-19f5-40e4-bd4f-312ade79d665
	I0409 00:49:35.408455    2144 type.go:168] "Response Body" body=<
		[duplicate hexdump elided; byte-identical to the Node "Response Body" dump shown earlier, truncated 20926 chars]
	 >
	I0409 00:49:35.631116    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 00:49:35.631116    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:49:35.631925    2144 sshutil.go:53] new ssh client: &{IP:192.168.113.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\id_rsa Username:docker}
	I0409 00:49:35.769710    2144 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0409 00:49:35.902094    2144 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0409 00:49:35.902946    2144 type.go:204] "Request Body" body=""
	I0409 00:49:35.902946    2144 round_trippers.go:470] GET https://192.168.113.157:8443/apis/storage.k8s.io/v1/storageclasses
	I0409 00:49:35.902946    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:35.902946    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:35.902946    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:35.902946    2144 type.go:168] "Request Body" body=""
	I0409 00:49:35.902946    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:35.902946    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:35.902946    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:35.902946    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:35.906416    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:35.906491    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:35.906491    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:35.906491    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:35.906491    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:35.906491    2144 round_trippers.go:587]     Content-Length: 957
	I0409 00:49:35.906573    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:35 GMT
	I0409 00:49:35.906491    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:35.906573    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:35.906639    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:35.906573    2144 round_trippers.go:587]     Audit-Id: 240bc8fe-fc47-4b83-b22d-24e2d464198a
	I0409 00:49:35.906639    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:35.906639    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:35.906639    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:35.906639    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:35.906639    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:35 GMT
	I0409 00:49:35.906639    2144 round_trippers.go:587]     Audit-Id: de43ccef-2d42-4a0d-a508-a8e8487c158f
	I0409 00:49:35.906965    2144 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 25 0a 11  73 74 6f 72 61 67 65 2e  |k8s..%..storage.|
		00000010  6b 38 73 2e 69 6f 2f 76  31 12 10 53 74 6f 72 61  |k8s.io/v1..Stora|
		00000020  67 65 43 6c 61 73 73 4c  69 73 74 12 8b 07 0a 09  |geClassList.....|
		00000030  0a 00 12 03 34 31 31 1a  00 12 fd 06 0a cd 06 0a  |....411.........|
		00000040  08 73 74 61 6e 64 61 72  64 12 00 1a 00 22 00 2a  |.standard....".*|
		00000050  24 37 39 31 33 66 31 66  37 2d 30 34 31 32 2d 34  |$7913f1f7-0412-4|
		00000060  62 39 39 2d 38 38 32 62  2d 64 64 37 34 64 66 34  |b99-882b-dd74df4|
		00000070  61 32 65 32 39 32 03 34  31 31 38 00 42 08 08 9f  |a2e292.4118.B...|
		00000080  88 d7 bf 06 10 00 5a 2f  0a 1f 61 64 64 6f 6e 6d  |......Z/..addonm|
		00000090  61 6e 61 67 65 72 2e 6b  75 62 65 72 6e 65 74 65  |anager.kubernete|
		000000a0  73 2e 69 6f 2f 6d 6f 64  65 12 0c 45 6e 73 75 72  |s.io/mode..Ensur|
		000000b0  65 45 78 69 73 74 73 62  b7 02 0a 30 6b 75 62 65  |eExistsb...0kube|
		000000c0  63 74 6c 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |ctl.kubernetes. [truncated 3713 chars]
	 >
	I0409 00:49:35.906965    2144 type.go:168] "Response Body" body=<
		[duplicate hexdump elided; byte-identical to the Node "Response Body" dump shown earlier, truncated 20926 chars]
	 >
	I0409 00:49:35.906965    2144 type.go:267] "Request Body" body=<
		00000000  6b 38 73 00 0a 21 0a 11  73 74 6f 72 61 67 65 2e  |k8s..!..storage.|
		00000010  6b 38 73 2e 69 6f 2f 76  31 12 0c 53 74 6f 72 61  |k8s.io/v1..Stora|
		00000020  67 65 43 6c 61 73 73 12  fd 06 0a cd 06 0a 08 73  |geClass........s|
		00000030  74 61 6e 64 61 72 64 12  00 1a 00 22 00 2a 24 37  |tandard....".*$7|
		00000040  39 31 33 66 31 66 37 2d  30 34 31 32 2d 34 62 39  |913f1f7-0412-4b9|
		00000050  39 2d 38 38 32 62 2d 64  64 37 34 64 66 34 61 32  |9-882b-dd74df4a2|
		00000060  65 32 39 32 03 34 31 31  38 00 42 08 08 9f 88 d7  |e292.4118.B.....|
		00000070  bf 06 10 00 5a 2f 0a 1f  61 64 64 6f 6e 6d 61 6e  |....Z/..addonman|
		00000080  61 67 65 72 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |ager.kubernetes.|
		00000090  69 6f 2f 6d 6f 64 65 12  0c 45 6e 73 75 72 65 45  |io/mode..EnsureE|
		000000a0  78 69 73 74 73 62 b7 02  0a 30 6b 75 62 65 63 74  |xistsb...0kubect|
		000000b0  6c 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |l.kubernetes.io/|
		000000c0  6c 61 73 74 2d 61 70 70  6c 69 65 64 2d 63 6f 6e  |last-applied-co [truncated 3632 chars]
	 >
	I0409 00:49:35.906965    2144 round_trippers.go:470] PUT https://192.168.113.157:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0409 00:49:35.906965    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:35.906965    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:35.906965    2144 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:35.906965    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:35.910374    2144 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 00:49:35.910374    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:35.910374    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:35.910374    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:35.910486    2144 round_trippers.go:587]     Content-Length: 939
	I0409 00:49:35.910486    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:35 GMT
	I0409 00:49:35.910486    2144 round_trippers.go:587]     Audit-Id: ed10335a-df4e-4b42-af97-7d51671ed19c
	I0409 00:49:35.910486    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:35.910486    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:35.910546    2144 type.go:267] "Response Body" body=<
		[hexdump elided; byte-identical to the StorageClass "Request Body" dump immediately above, truncated 3632 chars]
	 >
	I0409 00:49:35.916108    2144 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0409 00:49:35.918910    2144 addons.go:514] duration metric: took 9.9826895s for enable addons: enabled=[storage-provisioner default-storageclass]
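The default-storageclass addon above is the EnsureExists pattern in miniature: list the StorageClasses, then PUT "standard" back at its current metadata.resourceVersion (411 in the dump), so a concurrent writer would surface as a 409 Conflict rather than a lost update. A hedged client-go sketch of that create-or-rewrite flow; function name and the reconcile step are illustrative, not the exact minikube code:

    // Sketch: EnsureExists for the "standard" StorageClass, mirroring the GET + PUT above.
    package addons

    import (
        "context"

        storagev1 "k8s.io/api/storage/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    func ensureStorageClass(cs *kubernetes.Clientset, want *storagev1.StorageClass) error {
        _, err := cs.StorageV1().StorageClasses().Create(context.Background(), want, metav1.CreateOptions{})
        if !apierrors.IsAlreadyExists(err) {
            return err // nil on a fresh create, a real error otherwise
        }
        // Already present: re-read at the current resourceVersion and write back,
        // retrying if another writer races us to a 409 Conflict.
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            cur, err := cs.StorageV1().StorageClasses().Get(context.Background(), want.Name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            cur.Annotations = want.Annotations // illustrative reconcile step
            _, err = cs.StorageV1().StorageClasses().Update(context.Background(), cur, metav1.UpdateOptions{})
            return err
        })
    }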
	I0409 00:49:36.403366    2144 type.go:168] "Request Body" body=""
	I0409 00:49:36.403453    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:36.403453    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:36.403453    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:36.403453    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:36.406900    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:36.406968    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:36.406968    2144 round_trippers.go:587]     Audit-Id: 9962a0fd-764e-437b-b052-e36173b25d32
	I0409 00:49:36.406968    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:36.406968    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:36.406968    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:36.406968    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:36.406968    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:36 GMT
	I0409 00:49:36.407074    2144 type.go:168] "Response Body" body=<
		[duplicate hexdump elided; byte-identical to the Node "Response Body" dump shown earlier, truncated 20926 chars]
	 >
	I0409 00:49:36.407618    2144 node_ready.go:53] node "multinode-611500" has status "Ready":"False"
	I0409 00:49:36.903759    2144 type.go:168] "Request Body" body=""
	I0409 00:49:36.903759    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:36.903759    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:36.903759    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:36.903759    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:36.907966    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:36.907966    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:36.908114    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:36.908114    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:36.908114    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:36.908114    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:36.908114    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:36 GMT
	I0409 00:49:36.908114    2144 round_trippers.go:587]     Audit-Id: d812e147-ca67-4234-9330-53d8d29c67ea
	I0409 00:49:36.908534    2144 type.go:168] "Response Body" body=<
		[duplicate hexdump elided; byte-identical to the Node "Response Body" dump shown earlier, truncated 20926 chars]
	 >
	I0409 00:49:37.403074    2144 type.go:168] "Request Body" body=""
	I0409 00:49:37.403074    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:37.403074    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:37.403074    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:37.403074    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:37.406843    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:37.406843    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:37.406843    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:37.406843    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:37.406843    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:37 GMT
	I0409 00:49:37.406843    2144 round_trippers.go:587]     Audit-Id: 924b3546-907b-4307-b80f-3c71e9add750
	I0409 00:49:37.406843    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:37.406843    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:37.407270    2144 type.go:168] "Response Body" body=<
		[duplicate hexdump elided; byte-identical to the Node "Response Body" dump shown earlier, truncated 20926 chars]
	 >
	I0409 00:49:37.903627    2144 type.go:168] "Request Body" body=""
	I0409 00:49:37.903738    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:37.903738    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:37.903792    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:37.903792    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:37.908527    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:37.908630    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:37.908630    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:37.908630    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:37 GMT
	I0409 00:49:37.908630    2144 round_trippers.go:587]     Audit-Id: 2cf5dc72-d642-46dc-a79b-a4cbe0336905
	I0409 00:49:37.908630    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:37.908630    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:37.908630    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:37.909171    2144 type.go:168] "Response Body" body=<
		[duplicate hexdump elided; byte-identical to the Node "Response Body" dump shown earlier, truncated 20926 chars]
	 >
	I0409 00:49:38.403244    2144 type.go:168] "Request Body" body=""
	I0409 00:49:38.403244    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:38.403244    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:38.403244    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:38.403244    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:38.406492    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:38.406492    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:38.407511    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:38.407511    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:38.407511    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:38 GMT
	I0409 00:49:38.407596    2144 round_trippers.go:587]     Audit-Id: 7ee4e845-7ef9-400a-ac78-38199e803e90
	I0409 00:49:38.407596    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:38.407596    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:38.408553    2144 type.go:168] "Response Body" body=<
		[duplicate hexdump elided; byte-identical to the Node "Response Body" dump shown earlier, truncated 20926 chars]
	 >
	I0409 00:49:38.408764    2144 node_ready.go:53] node "multinode-611500" has status "Ready":"False"
	I0409 00:49:38.903384    2144 type.go:168] "Request Body" body=""
	I0409 00:49:38.903384    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:38.903384    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:38.903384    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:38.903384    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:38.907456    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:38.907456    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:38.907536    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:38.907536    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:38 GMT
	I0409 00:49:38.907536    2144 round_trippers.go:587]     Audit-Id: 4ecd56f1-ac47-467e-8a25-40e12d1a695c
	I0409 00:49:38.907536    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:38.907536    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:38.907536    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:38.907962    2144 type.go:168] "Response Body" body=<
		[duplicate hexdump elided; byte-identical to the Node "Response Body" dump shown earlier, truncated 20926 chars]
	 >
	I0409 00:49:39.403673    2144 type.go:168] "Request Body" body=""
	I0409 00:49:39.404265    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:39.404265    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:39.404323    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:39.404323    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:39.408165    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:39.408165    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:39.408268    2144 round_trippers.go:587]     Audit-Id: 20067790-016c-4676-a196-d6a620a2dad9
	I0409 00:49:39.408268    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:39.408268    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:39.408268    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:39.408268    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:39.408268    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:39 GMT
	I0409 00:49:39.408699    2144 type.go:168] "Response Body" body=<
		[duplicate hexdump elided; byte-identical to the Node "Response Body" dump shown earlier, truncated 20926 chars]
	 >
	I0409 00:49:39.902915    2144 type.go:168] "Request Body" body=""
	I0409 00:49:39.903386    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:39.903513    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:39.903574    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:39.903574    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:39.907546    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:39.907614    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:39.907614    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:39.907614    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:39.907614    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:39.907614    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:39.907614    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:39 GMT
	I0409 00:49:39.907614    2144 round_trippers.go:587]     Audit-Id: a48cd904-bb61-4657-bb2e-ea818f58cb83
	I0409 00:49:39.908456    2144 type.go:168] "Response Body" body=<
		[duplicate hexdump elided; byte-identical to the Node "Response Body" dump shown earlier, truncated 20926 chars]
	 >
	I0409 00:49:40.403275    2144 type.go:168] "Request Body" body=""
	I0409 00:49:40.403368    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:40.403368    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:40.403368    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:40.403368    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:40.408320    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:40.408320    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:40.408320    2144 round_trippers.go:587]     Audit-Id: db9ea638-bc1d-449d-8236-25180233d332
	I0409 00:49:40.408320    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:40.408320    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:40.408320    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:40.408320    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:40.408480    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:40 GMT
	I0409 00:49:40.408785    2144 type.go:168] "Response Body" body=<
		[duplicate hexdump elided; byte-identical to the Node "Response Body" dump shown earlier, truncated 20926 chars]
	 >
	I0409 00:49:40.408969    2144 node_ready.go:53] node "multinode-611500" has status "Ready":"False"
	I0409 00:49:40.902749    2144 type.go:168] "Request Body" body=""
	I0409 00:49:40.902749    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:40.902749    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:40.902749    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:40.902749    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:40.906862    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:40.906862    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:40.906862    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:40.906862    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:40 GMT
	I0409 00:49:40.906862    2144 round_trippers.go:587]     Audit-Id: 4e45a08b-be32-43d9-bcd8-9bff0d387838
	I0409 00:49:40.906862    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:40.906862    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:40.906862    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:40.906862    2144 type.go:168] "Response Body" body=<
		[duplicate hexdump elided; byte-identical to the Node "Response Body" dump shown earlier, truncated 20926 chars]
	 >
	I0409 00:49:41.403441    2144 type.go:168] "Request Body" body=""
	I0409 00:49:41.403441    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:41.403441    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:41.403441    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:41.403441    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:41.407627    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:41.407674    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:41.407674    2144 round_trippers.go:587]     Audit-Id: 0b74767f-4509-45f9-a8c0-2f732a0649ea
	I0409 00:49:41.407674    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:41.407674    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:41.407674    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:41.407674    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:41.407674    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:41 GMT
	I0409 00:49:41.407674    2144 type.go:168] "Response Body" body=<
		[duplicate hexdump elided; byte-identical to the Node "Response Body" dump shown earlier, truncated 20926 chars]
	 >
	I0409 00:49:41.902935    2144 type.go:168] "Request Body" body=""
	I0409 00:49:41.902935    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:41.902935    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:41.902935    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:41.902935    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:41.907478    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:41.907478    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:41.907478    2144 round_trippers.go:587]     Audit-Id: 0564809b-7e96-45d4-a461-8eda230f42d1
	I0409 00:49:41.907478    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:41.907478    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:41.907478    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:41.907478    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:41.907478    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:41 GMT
	I0409 00:49:41.908450    2144 type.go:168] "Response Body" body=<
		[duplicate hexdump elided; byte-identical to the Node "Response Body" dump shown earlier, truncated 20926 chars]
	 >
	I0409 00:49:42.403813    2144 type.go:168] "Request Body" body=""
	I0409 00:49:42.403982    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:42.403982    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:42.403982    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:42.403982    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:42.408238    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:42.408268    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:42.408268    2144 round_trippers.go:587]     Audit-Id: b32f1e24-a929-452d-85f3-5e62a1f573b7
	I0409 00:49:42.408268    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:42.408329    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:42.408329    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:42.408329    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:42.408329    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:42 GMT
	I0409 00:49:42.411018    2144 type.go:168] "Response Body" body=<
		[duplicate hexdump elided; byte-identical to the Node "Response Body" dump shown earlier, truncated 20926 chars]
	 >
	I0409 00:49:42.411018    2144 node_ready.go:53] node "multinode-611500" has status "Ready":"False"
	I0409 00:49:42.902982    2144 type.go:168] "Request Body" body=""
	I0409 00:49:42.902982    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:42.902982    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:42.902982    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:42.902982    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:42.907446    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:42.907446    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:42.907531    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:42 GMT
	I0409 00:49:42.907531    2144 round_trippers.go:587]     Audit-Id: 8b7d6653-27fa-4aa6-b86b-2f80a6fe27eb
	I0409 00:49:42.907531    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:42.907531    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:42.907531    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:42.907531    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:42.907997    2144 type.go:168] "Response Body" body=<
		[duplicate hexdump elided; byte-identical to the Node "Response Body" dump shown earlier, truncated 20926 chars]
	 >
	I0409 00:49:43.403704    2144 type.go:168] "Request Body" body=""
	I0409 00:49:43.403704    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:43.403704    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:43.403704    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:43.403704    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:43.409073    2144 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 00:49:43.409151    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:43.409151    2144 round_trippers.go:587]     Audit-Id: 4bc1f2c5-4e08-4eea-9180-89d8c1b8d283
	I0409 00:49:43.409151    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:43.409151    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:43.409151    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:43.409151    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:43.409151    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:43 GMT
	I0409 00:49:43.409564    2144 type.go:168] "Response Body" body=<
		[duplicate hexdump elided; byte-identical to the Node "Response Body" dump shown earlier, truncated 20926 chars]
	 >
	I0409 00:49:43.903659    2144 type.go:168] "Request Body" body=""
	I0409 00:49:43.903774    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:43.903850    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:43.903896    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:43.903896    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:43.908584    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:43.908584    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:43.908584    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:43.908584    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:43.908584    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:43.908584    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:43.908584    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:43 GMT
	I0409 00:49:43.908584    2144 round_trippers.go:587]     Audit-Id: 8abfb659-3565-47c9-9ca6-2c9e2611c3e2
	I0409 00:49:43.909512    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bd 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 33 31  39 38 00 42 08 08 8d 88  |34242.3198.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20926 chars]
	 >
	I0409 00:49:44.403209    2144 type.go:168] "Request Body" body=""
	I0409 00:49:44.403209    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:44.403209    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:44.403209    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:44.403209    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:44.407102    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:44.407182    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:44.407182    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:44 GMT
	I0409 00:49:44.407182    2144 round_trippers.go:587]     Audit-Id: 2df74b1a-8d9c-4be1-91ac-978f59e43dad
	I0409 00:49:44.407182    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:44.407182    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:44.407182    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:44.407182    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:44.408235    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bd 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 33 31  39 38 00 42 08 08 8d 88  |34242.3198.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20926 chars]
	 >
	I0409 00:49:44.902891    2144 type.go:168] "Request Body" body=""
	I0409 00:49:44.902891    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:44.902891    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:44.902891    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:44.902891    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:44.907152    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:44.907747    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:44.907803    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:44 GMT
	I0409 00:49:44.907803    2144 round_trippers.go:587]     Audit-Id: fec221c8-41d0-4a92-a494-87eb78e58169
	I0409 00:49:44.907803    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:44.907803    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:44.907839    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:44.907839    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:44.908069    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bd 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 33 31  39 38 00 42 08 08 8d 88  |34242.3198.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20926 chars]
	 >
	I0409 00:49:44.908069    2144 node_ready.go:53] node "multinode-611500" has status "Ready":"False"
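The loop above re-fetches the node roughly every 500ms and logs the Ready status each time it is still False; it flips to True about 19.5s in, a few entries below. A hedged sketch of such a poll using client-go's wait helpers — the function name waitNodeReady is illustrative, not minikube's actual helper:

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the named node every 500ms (matching the cadence of
// the log above) until its NodeReady condition reports True, or times out.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err // surface API errors instead of retrying forever
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // condition not posted yet; keep polling
		})
}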
	I0409 00:49:45.403627    2144 type.go:168] "Request Body" body=""
	I0409 00:49:45.404159    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:45.404159    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:45.404316    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:45.404316    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:45.407379    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:45.407379    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:45.407379    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:45.407492    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:45.407492    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:45.407492    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:45.407492    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:45 GMT
	I0409 00:49:45.407492    2144 round_trippers.go:587]     Audit-Id: b4605d09-23f5-4679-acf2-9cea822996c0
	I0409 00:49:45.407602    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bd 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 33 31  39 38 00 42 08 08 8d 88  |34242.3198.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20926 chars]
	 >
	I0409 00:49:45.903839    2144 type.go:168] "Request Body" body=""
	I0409 00:49:45.904036    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:45.904036    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:45.904036    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:45.904036    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:45.907183    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:45.907183    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:45.907183    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:45.907183    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:45.907183    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:45.907183    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:45 GMT
	I0409 00:49:45.907183    2144 round_trippers.go:587]     Audit-Id: 6cf4fadf-f99f-4781-b7a9-f89b83df350f
	I0409 00:49:45.907183    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:45.908239    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bd 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 33 31  39 38 00 42 08 08 8d 88  |34242.3198.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20926 chars]
	 >
	I0409 00:49:46.402964    2144 type.go:168] "Request Body" body=""
	I0409 00:49:46.402964    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:46.402964    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:46.402964    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:46.402964    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:46.407240    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:46.407766    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:46.407893    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:46.407893    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:46.407893    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:46 GMT
	I0409 00:49:46.407893    2144 round_trippers.go:587]     Audit-Id: a9925b8a-8d1a-4759-b89b-d01a1b76a244
	I0409 00:49:46.407893    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:46.407893    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:46.407893    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c4 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 34 31  37 38 00 42 08 08 8d 88  |34242.4178.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20301 chars]
	 >
	I0409 00:49:46.408506    2144 node_ready.go:49] node "multinode-611500" has status "Ready":"True"
	I0409 00:49:46.408506    2144 node_ready.go:38] duration metric: took 19.5057427s for node "multinode-611500" to be "Ready" ...
	I0409 00:49:46.408506    2144 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
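With the node Ready, the test now waits (up to 6m) for every pod carrying one of the listed system-critical labels. A rough Go equivalent of that discovery step is sketched below; note the log above actually issues a single unfiltered List of the kube-system namespace, so querying per selector is an equivalent, slightly chattier alternative:

package podwait

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// criticalSelectors mirrors the label list printed by pod_ready.go above.
var criticalSelectors = []string{
	"k8s-app=kube-dns",
	"component=etcd",
	"component=kube-apiserver",
	"component=kube-controller-manager",
	"k8s-app=kube-proxy",
	"component=kube-scheduler",
}

// listCritical gathers the system-critical pods to wait on, one List per
// label selector.
func listCritical(ctx context.Context, cs kubernetes.Interface) ([]corev1.Pod, error) {
	var pods []corev1.Pod
	for _, sel := range criticalSelectors {
		list, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return nil, err
		}
		pods = append(pods, list.Items...)
	}
	return pods, nil
}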
	I0409 00:49:46.408506    2144 type.go:204] "Request Body" body=""
	I0409 00:49:46.408506    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods
	I0409 00:49:46.408506    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:46.408506    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:46.408506    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:46.416165    2144 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0409 00:49:46.416165    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:46.416165    2144 round_trippers.go:587]     Audit-Id: e42c00d4-a4d8-4572-9c3d-ed57752770ed
	I0409 00:49:46.416165    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:46.416165    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:46.416165    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:46.416165    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:46.416165    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:46 GMT
	I0409 00:49:46.416165    2144 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 df c6 02 0a  09 0a 00 12 03 34 32 33  |ist..........423|
		00000020  1a 00 12 db 26 0a 8b 19  0a 18 63 6f 72 65 64 6e  |....&.....coredn|
		00000030  73 2d 36 36 38 64 36 62  66 39 62 63 2d 64 35 34  |s-668d6bf9bc-d54|
		00000040  73 34 12 13 63 6f 72 65  64 6e 73 2d 36 36 38 64  |s4..coredns-668d|
		00000050  36 62 66 39 62 63 2d 1a  0b 6b 75 62 65 2d 73 79  |6bf9bc-..kube-sy|
		00000060  73 74 65 6d 22 00 2a 24  31 32 34 33 31 66 32 37  |stem".*$12431f27|
		00000070  2d 37 65 34 65 2d 34 31  63 39 2d 38 64 35 34 2d  |-7e4e-41c9-8d54-|
		00000080  62 63 37 62 65 32 30 37  34 62 39 63 32 03 34 32  |bc7be2074b9c2.42|
		00000090  33 38 00 42 08 08 96 88  d7 bf 06 10 00 5a 13 0a  |38.B.........Z..|
		000000a0  07 6b 38 73 2d 61 70 70  12 08 6b 75 62 65 2d 64  |.k8s-app..kube-d|
		000000b0  6e 73 5a 1f 0a 11 70 6f  64 2d 74 65 6d 70 6c 61  |nsZ...pod-templa|
		000000c0  74 65 2d 68 61 73 68 12  0a 36 36 38 64 36 62 66  |te-hash..668d6b [truncated 205634 chars]
	 >
	I0409 00:49:46.418967    2144 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace to be "Ready" ...
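Each per-pod wait that follows repeats the same pattern: GET the pod, read its PodReady condition, re-check the node, and poll again on an interval. A minimal predicate for the pod-side check (an assumed shape, mirroring what pod_ready.go summarizes as has status "Ready":"True"):

package readiness

import corev1 "k8s.io/api/core/v1"

// podIsReady reports whether the pod's PodReady condition is True — the
// field the log entries below condense into has status "Ready":"True".
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}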
	I0409 00:49:46.418967    2144 type.go:168] "Request Body" body=""
	I0409 00:49:46.418967    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 00:49:46.418967    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:46.418967    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:46.418967    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:46.421922    2144 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 00:49:46.421922    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:46.421922    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:46.421922    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:46 GMT
	I0409 00:49:46.421922    2144 round_trippers.go:587]     Audit-Id: cd948279-b901-485a-8f6c-06cb8bd58621
	I0409 00:49:46.421922    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:46.421922    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:46.421922    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:46.425883    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  db 26 0a 8b 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.&.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 64 35 34 73 34 12  |68d6bf9bc-d54s4.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 32 34  33 31 66 32 37 2d 37 65  |m".*$12431f27-7e|
		00000060  34 65 2d 34 31 63 39 2d  38 64 35 34 2d 62 63 37  |4e-41c9-8d54-bc7|
		00000070  62 65 32 30 37 34 62 39  63 32 03 34 32 33 38 00  |be2074b9c2.4238.|
		00000080  42 08 08 96 88 d7 bf 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 23609 chars]
	 >
	I0409 00:49:46.425883    2144 type.go:168] "Request Body" body=""
	I0409 00:49:46.425883    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:46.425883    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:46.425883    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:46.427026    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:46.429514    2144 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 00:49:46.429514    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:46.429514    2144 round_trippers.go:587]     Audit-Id: 1a772d20-51b2-475d-9935-66a73629aa85
	I0409 00:49:46.429514    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:46.429514    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:46.429514    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:46.429514    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:46.429514    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:46 GMT
	I0409 00:49:46.429514    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c4 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 34 31  37 38 00 42 08 08 8d 88  |34242.4178.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20301 chars]
	 >
	I0409 00:49:46.919696    2144 type.go:168] "Request Body" body=""
	I0409 00:49:46.919696    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 00:49:46.919696    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:46.919696    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:46.919696    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:46.923871    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:46.923871    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:46.923871    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:46.923871    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:46.923871    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:46 GMT
	I0409 00:49:46.923871    2144 round_trippers.go:587]     Audit-Id: 37392b03-836e-463b-a7ec-24778a5f66af
	I0409 00:49:46.923871    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:46.923871    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:46.923871    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  db 26 0a 8b 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.&.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 64 35 34 73 34 12  |68d6bf9bc-d54s4.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 32 34  33 31 66 32 37 2d 37 65  |m".*$12431f27-7e|
		00000060  34 65 2d 34 31 63 39 2d  38 64 35 34 2d 62 63 37  |4e-41c9-8d54-bc7|
		00000070  62 65 32 30 37 34 62 39  63 32 03 34 32 33 38 00  |be2074b9c2.4238.|
		00000080  42 08 08 96 88 d7 bf 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 23609 chars]
	 >
	I0409 00:49:46.923871    2144 type.go:168] "Request Body" body=""
	I0409 00:49:46.923871    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:46.923871    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:46.923871    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:46.923871    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:46.927290    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:46.927290    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:46.927796    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:46 GMT
	I0409 00:49:46.927796    2144 round_trippers.go:587]     Audit-Id: 4063677a-1379-4b5d-a4b5-166997b34b8b
	I0409 00:49:46.927889    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:46.927950    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:46.927950    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:46.927950    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:46.928547    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c4 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 34 31  37 38 00 42 08 08 8d 88  |34242.4178.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20301 chars]
	 >
	I0409 00:49:47.419887    2144 type.go:168] "Request Body" body=""
	I0409 00:49:47.420253    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 00:49:47.420253    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:47.420253    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:47.420253    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:47.424162    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:47.424976    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:47.424976    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:47.424976    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:47.424976    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:47 GMT
	I0409 00:49:47.424976    2144 round_trippers.go:587]     Audit-Id: 01f3cb6c-6f88-438a-a2f1-9e6068903ed0
	I0409 00:49:47.424976    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:47.424976    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:47.425394    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  db 26 0a 8b 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.&.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 64 35 34 73 34 12  |68d6bf9bc-d54s4.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 32 34  33 31 66 32 37 2d 37 65  |m".*$12431f27-7e|
		00000060  34 65 2d 34 31 63 39 2d  38 64 35 34 2d 62 63 37  |4e-41c9-8d54-bc7|
		00000070  62 65 32 30 37 34 62 39  63 32 03 34 32 33 38 00  |be2074b9c2.4238.|
		00000080  42 08 08 96 88 d7 bf 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 23609 chars]
	 >
	I0409 00:49:47.425838    2144 type.go:168] "Request Body" body=""
	I0409 00:49:47.425990    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:47.426025    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:47.426075    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:47.426075    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:47.429622    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:47.429696    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:47.429696    2144 round_trippers.go:587]     Audit-Id: ed9fd4e7-fc36-4d44-bca1-fc8587a3cb2c
	I0409 00:49:47.429696    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:47.429696    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:47.429761    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:47.429761    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:47.429761    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:47 GMT
	I0409 00:49:47.429994    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c4 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 34 31  37 38 00 42 08 08 8d 88  |34242.4178.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20301 chars]
	 >
	I0409 00:49:47.919696    2144 type.go:168] "Request Body" body=""
	I0409 00:49:47.919919    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 00:49:47.919919    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:47.919919    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:47.919919    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:47.923480    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:47.923480    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:47.923480    2144 round_trippers.go:587]     Audit-Id: 5104c50c-de31-4330-be7d-89aa07e745f7
	I0409 00:49:47.923480    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:47.923480    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:47.923480    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:47.923480    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:47.923480    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:47 GMT
	I0409 00:49:47.923480    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  db 26 0a 8b 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.&.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 64 35 34 73 34 12  |68d6bf9bc-d54s4.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 32 34  33 31 66 32 37 2d 37 65  |m".*$12431f27-7e|
		00000060  34 65 2d 34 31 63 39 2d  38 64 35 34 2d 62 63 37  |4e-41c9-8d54-bc7|
		00000070  62 65 32 30 37 34 62 39  63 32 03 34 32 33 38 00  |be2074b9c2.4238.|
		00000080  42 08 08 96 88 d7 bf 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 23609 chars]
	 >
	I0409 00:49:47.924461    2144 type.go:168] "Request Body" body=""
	I0409 00:49:47.924579    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:47.924579    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:47.924579    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:47.924579    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:47.927469    2144 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0409 00:49:47.927469    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:47.927469    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:47.927469    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:47.927469    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:47 GMT
	I0409 00:49:47.927469    2144 round_trippers.go:587]     Audit-Id: cd33a8c3-955c-49a4-b513-389fa711129f
	I0409 00:49:47.927469    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:47.927469    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:47.927469    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c4 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 34 31  37 38 00 42 08 08 8d 88  |34242.4178.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20301 chars]
	 >
	I0409 00:49:48.419337    2144 type.go:168] "Request Body" body=""
	I0409 00:49:48.420016    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 00:49:48.420137    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:48.420137    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:48.420137    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:48.425924    2144 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 00:49:48.425991    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:48.425991    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:48.425991    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:48.426056    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:48 GMT
	I0409 00:49:48.426056    2144 round_trippers.go:587]     Audit-Id: e2998047-3c95-4a32-be37-f3a16b4c9095
	I0409 00:49:48.426056    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:48.426078    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:48.426692    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d4 27 0a ae 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.'.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 64 35 34 73 34 12  |68d6bf9bc-d54s4.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 32 34  33 31 66 32 37 2d 37 65  |m".*$12431f27-7e|
		00000060  34 65 2d 34 31 63 39 2d  38 64 35 34 2d 62 63 37  |4e-41c9-8d54-bc7|
		00000070  62 65 32 30 37 34 62 39  63 32 03 34 33 36 38 00  |be2074b9c2.4368.|
		00000080  42 08 08 96 88 d7 bf 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 24171 chars]
	 >
	I0409 00:49:48.426692    2144 type.go:168] "Request Body" body=""
	I0409 00:49:48.426692    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:48.426692    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:48.426692    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:48.426692    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:48.430634    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:48.430634    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:48.430634    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:48.430634    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:48 GMT
	I0409 00:49:48.430634    2144 round_trippers.go:587]     Audit-Id: 122548df-1211-4968-b683-9ea67b724a86
	I0409 00:49:48.430634    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:48.430634    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:48.430634    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:48.431629    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c4 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 34 31  37 38 00 42 08 08 8d 88  |34242.4178.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20301 chars]
	 >
	I0409 00:49:48.431629    2144 pod_ready.go:93] pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace has status "Ready":"True"
	I0409 00:49:48.431629    2144 pod_ready.go:82] duration metric: took 2.0126357s for pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace to be "Ready" ...
	I0409 00:49:48.431629    2144 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 00:49:48.431629    2144 type.go:168] "Request Body" body=""
	I0409 00:49:48.431629    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-611500
	I0409 00:49:48.431629    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:48.431629    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:48.431629    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:48.435888    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:48.435888    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:48.435888    2144 round_trippers.go:587]     Audit-Id: bb742c21-38ad-41e7-8e23-9ea2bb8df49f
	I0409 00:49:48.435888    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:48.435888    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:48.435888    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:48.435888    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:48.435888    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:48 GMT
	I0409 00:49:48.436622    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 2b 0a a0 1a 0a 15 65  74 63 64 2d 6d 75 6c 74  |.+.....etcd-mult|
		00000020  69 6e 6f 64 65 2d 36 31  31 35 30 30 12 00 1a 0b  |inode-611500....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 36  |kube-system".*$6|
		00000040  32 32 64 39 61 61 61 2d  31 66 32 66 2d 34 33 35  |22d9aaa-1f2f-435|
		00000050  63 2d 38 63 65 61 2d 62  35 33 62 61 64 62 61 32  |c-8cea-b53badba2|
		00000060  37 66 34 32 03 33 39 35  38 00 42 08 08 90 88 d7  |7f42.3958.B.....|
		00000070  bf 06 10 00 5a 11 0a 09  63 6f 6d 70 6f 6e 65 6e  |....Z...componen|
		00000080  74 12 04 65 74 63 64 5a  15 0a 04 74 69 65 72 12  |t..etcdZ...tier.|
		00000090  0d 63 6f 6e 74 72 6f 6c  2d 70 6c 61 6e 65 62 50  |.control-planebP|
		000000a0  0a 30 6b 75 62 65 61 64  6d 2e 6b 75 62 65 72 6e  |.0kubeadm.kubern|
		000000b0  65 74 65 73 2e 69 6f 2f  65 74 63 64 2e 61 64 76  |etes.io/etcd.adv|
		000000c0  65 72 74 69 73 65 2d 63  6c 69 65 6e 74 2d 75 72  |ertise-client-u [truncated 26543 chars]
	 >
	I0409 00:49:48.436882    2144 type.go:168] "Request Body" body=""
	I0409 00:49:48.436944    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:48.436998    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:48.436998    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:48.436998    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:48.440440    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:48.440440    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:48.440440    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:48.440440    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:48.440440    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:48.440440    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:48.440440    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:48 GMT
	I0409 00:49:48.440440    2144 round_trippers.go:587]     Audit-Id: ede83758-d84a-4dab-9980-37c3b03a61e3
	I0409 00:49:48.441327    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c4 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 34 31  37 38 00 42 08 08 8d 88  |34242.4178.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20301 chars]
	 >
	I0409 00:49:48.441568    2144 pod_ready.go:93] pod "etcd-multinode-611500" in "kube-system" namespace has status "Ready":"True"
	I0409 00:49:48.441604    2144 pod_ready.go:82] duration metric: took 9.9386ms for pod "etcd-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 00:49:48.441604    2144 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 00:49:48.441683    2144 type.go:168] "Request Body" body=""
	I0409 00:49:48.441760    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-611500
	I0409 00:49:48.441787    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:48.441787    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:48.441787    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:48.444373    2144 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 00:49:48.445457    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:48.445457    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:48.445457    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:48 GMT
	I0409 00:49:48.445457    2144 round_trippers.go:587]     Audit-Id: 8d6c6303-e519-4028-a528-a4a91d27805a
	I0409 00:49:48.445457    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:48.445457    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:48.445457    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:48.445457    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  99 34 0a b0 1c 0a 1f 6b  75 62 65 2d 61 70 69 73  |.4.....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  36 31 31 35 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |611500....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 35 30 31 39 36 37 37  |ystem".*$5019677|
		00000050  35 2d 62 63 30 63 2d 34  31 63 31 2d 62 33 36 63  |5-bc0c-41c1-b36c|
		00000060  2d 31 39 33 36 39 35 64  32 64 62 32 33 32 03 33  |-193695d2db232.3|
		00000070  39 31 38 00 42 08 08 90  88 d7 bf 06 10 00 5a 1b  |918.B.........Z.|
		00000080  0a 09 63 6f 6d 70 6f 6e  65 6e 74 12 0e 6b 75 62  |..component..kub|
		00000090  65 2d 61 70 69 73 65 72  76 65 72 5a 15 0a 04 74  |e-apiserverZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 57 0a 3f 6b 75  62 65 61 64 6d 2e 6b 75  |nebW.?kubeadm.ku|
		000000c0  62 65 72 6e 65 74 65 73  2e 69 6f 2f 6b 75 62 65  |bernetes.io/kub [truncated 32076 chars]
	 >
	I0409 00:49:48.445457    2144 type.go:168] "Request Body" body=""
	I0409 00:49:48.445457    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:48.445457    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:48.446144    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:48.446144    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:48.448828    2144 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 00:49:48.448828    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:48.448828    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:48 GMT
	I0409 00:49:48.448828    2144 round_trippers.go:587]     Audit-Id: 514bdd38-ba9d-4ab4-bb9c-e61b1dde480b
	I0409 00:49:48.448828    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:48.448828    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:48.448828    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:48.448828    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:48.449121    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c4 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 34 31  37 38 00 42 08 08 8d 88  |34242.4178.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20301 chars]
	 >
	I0409 00:49:48.449121    2144 pod_ready.go:93] pod "kube-apiserver-multinode-611500" in "kube-system" namespace has status "Ready":"True"
	I0409 00:49:48.449121    2144 pod_ready.go:82] duration metric: took 7.5164ms for pod "kube-apiserver-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 00:49:48.449121    2144 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 00:49:48.449121    2144 type.go:168] "Request Body" body=""
	I0409 00:49:48.449602    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 00:49:48.449602    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:48.449602    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:48.449716    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:48.452048    2144 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 00:49:48.452048    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:48.452048    2144 round_trippers.go:587]     Audit-Id: c0c52005-83bc-4fc7-8b2c-5c1d13b5598d
	I0409 00:49:48.452048    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:48.452048    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:48.452048    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:48.452048    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:48.452048    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:48 GMT
	I0409 00:49:48.452540    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  f5 30 0a 9b 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.0....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 03  33 38 38 38 00 42 08 08  |ec96062.3888.B..|
		00000080  90 88 d7 bf 06 10 00 5a  24 0a 09 63 6f 6d 70 6f  |.......Z$..compo|
		00000090  6e 65 6e 74 12 17 6b 75  62 65 2d 63 6f 6e 74 72  |nent..kube-contr|
		000000a0  6f 6c 6c 65 72 2d 6d 61  6e 61 67 65 72 5a 15 0a  |oller-managerZ..|
		000000b0  04 74 69 65 72 12 0d 63  6f 6e 74 72 6f 6c 2d 70  |.tier..control-p|
		000000c0  6c 61 6e 65 62 3d 0a 19  6b 75 62 65 72 6e 65 74  |laneb=..kuberne [truncated 30018 chars]
	 >
	I0409 00:49:48.452850    2144 type.go:168] "Request Body" body=""
	I0409 00:49:48.452850    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:48.452850    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:48.452850    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:48.452850    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:48.455374    2144 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 00:49:48.455374    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:48.455374    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:48 GMT
	I0409 00:49:48.455374    2144 round_trippers.go:587]     Audit-Id: 836c86f9-43bd-41b9-9b73-9ce50908c158
	I0409 00:49:48.455374    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:48.455374    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:48.455374    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:48.455374    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:48.457247    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c4 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 34 31  37 38 00 42 08 08 8d 88  |34242.4178.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20301 chars]
	 >
	I0409 00:49:48.457247    2144 pod_ready.go:93] pod "kube-controller-manager-multinode-611500" in "kube-system" namespace has status "Ready":"True"
	I0409 00:49:48.457247    2144 pod_ready.go:82] duration metric: took 8.126ms for pod "kube-controller-manager-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 00:49:48.457247    2144 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zxxgf" in "kube-system" namespace to be "Ready" ...
	I0409 00:49:48.457247    2144 type.go:168] "Request Body" body=""
	I0409 00:49:48.458269    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxxgf
	I0409 00:49:48.458269    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:48.458312    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:48.458312    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:48.459963    2144 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0409 00:49:48.460867    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:48.460867    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:48.460867    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:48.460867    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:48.460936    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:48.460936    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:48 GMT
	I0409 00:49:48.460936    2144 round_trippers.go:587]     Audit-Id: 1d42d997-0503-47cd-8e6d-2a148b66e9bf
	I0409 00:49:48.461293    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a7 25 0a c1 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 7a 78 78 67 66 12  0b 6b 75 62 65 2d 70 72  |y-zxxgf..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 33 35 30  36 65 65 65 37 2d 64 39  |m".*$3506eee7-d9|
		00000050  34 36 2d 34 64 64 65 2d  39 31 63 39 2d 39 66 63  |46-4dde-91c9-9fc|
		00000060  35 63 31 34 37 34 34 33  34 32 03 33 39 32 38 00  |5c14744342.3928.|
		00000070  42 08 08 96 88 d7 bf 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  37 62 62 38 34 63 34 39  |n-hash..7bb84c49|
		000000a0  38 34 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |84Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22673 chars]
	 >
	I0409 00:49:48.461434    2144 type.go:168] "Request Body" body=""
	I0409 00:49:48.461615    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:48.461641    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:48.461707    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:48.461707    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:48.462431    2144 round_trippers.go:581] Response Status: 200 OK in 0 milliseconds
	I0409 00:49:48.462431    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:48.462431    2144 round_trippers.go:587]     Audit-Id: 7ea85239-cb3b-480e-8185-65ac9e6cd3f9
	I0409 00:49:48.462431    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:48.462431    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:48.462431    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:48.462431    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:48.462431    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:48 GMT
	I0409 00:49:48.462431    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c4 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 34 31  37 38 00 42 08 08 8d 88  |34242.4178.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20301 chars]
	 >
	I0409 00:49:48.462431    2144 pod_ready.go:93] pod "kube-proxy-zxxgf" in "kube-system" namespace has status "Ready":"True"
	I0409 00:49:48.462431    2144 pod_ready.go:82] duration metric: took 5.1844ms for pod "kube-proxy-zxxgf" in "kube-system" namespace to be "Ready" ...
	I0409 00:49:48.462431    2144 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 00:49:48.464729    2144 type.go:168] "Request Body" body=""
	I0409 00:49:48.619552    2144 request.go:661] Waited for 154.8216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-611500
	I0409 00:49:48.619552    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-611500
	I0409 00:49:48.619552    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:48.619552    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:48.619552    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:48.623527    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:49:48.623527    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:48.623527    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:48.623527    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:48.623527    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:48.623527    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:48.623527    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:48 GMT
	I0409 00:49:48.623527    2144 round_trippers.go:587]     Audit-Id: 1402d716-6ab3-49a9-82b5-0f0068fdf939
	I0409 00:49:48.623985    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  80 23 0a 83 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.#.....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  36 31 31 35 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |611500....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 39 31 38 35 64 35 63  |ystem".*$9185d5c|
		00000050  30 2d 62 32 38 61 2d 34  33 38 63 2d 62 30 35 61  |0-b28a-438c-b05a|
		00000060  2d 36 34 36 36 37 65 34  61 63 33 64 37 32 03 33  |-64667e4ac3d72.3|
		00000070  38 33 38 00 42 08 08 90  88 d7 bf 06 10 00 5a 1b  |838.B.........Z.|
		00000080  0a 09 63 6f 6d 70 6f 6e  65 6e 74 12 0e 6b 75 62  |..component..kub|
		00000090  65 2d 73 63 68 65 64 75  6c 65 72 5a 15 0a 04 74  |e-schedulerZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 3d 0a 19 6b 75  62 65 72 6e 65 74 65 73  |neb=..kubernetes|
		000000c0  2e 69 6f 2f 63 6f 6e 66  69 67 2e 68 61 73 68 12  |.io/config.hash [truncated 21244 chars]
	 >
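
The "Waited ... due to client-side throttling, not priority and fairness" messages above come from client-go's local token-bucket rate limiter (request.go), not from API Priority and Fairness on the server: with the default rest.Config (QPS 5, burst 10), this burst of readiness probes quickly queues. A minimal sketch of loosening that limiter when building a clientset; kubeconfigPath and the exact numbers are illustrative, not minikube's settings:

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newClientset builds a clientset whose client-side rate limiter is
    // roomier than client-go's default, which is what produces the
    // "Waited ... due to client-side throttling" log lines.
    func newClientset(kubeconfigPath string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50    // default is 5 requests/second
    	cfg.Burst = 100 // default is 10
    	return kubernetes.NewForConfig(cfg)
    }
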
	I0409 00:49:48.623985    2144 type.go:168] "Request Body" body=""
	I0409 00:49:48.820314    2144 request.go:661] Waited for 196.326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:48.820314    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:49:48.820314    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:48.820314    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:48.820314    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:48.825911    2144 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 00:49:48.825911    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:48.825988    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:48.825988    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:48.825988    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:48.825988    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:48.825988    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:48 GMT
	I0409 00:49:48.825988    2144 round_trippers.go:587]     Audit-Id: d1d41c13-375d-4cd6-a5ef-47a4a6a2b072
	I0409 00:49:48.826074    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c4 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 34 31  37 38 00 42 08 08 8d 88  |34242.4178.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20301 chars]
	 >
	I0409 00:49:48.826074    2144 pod_ready.go:93] pod "kube-scheduler-multinode-611500" in "kube-system" namespace has status "Ready":"True"
	I0409 00:49:48.826611    2144 pod_ready.go:82] duration metric: took 364.1751ms for pod "kube-scheduler-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 00:49:48.826611    2144 pod_ready.go:39] duration metric: took 2.4180727s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
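
The pod_ready.go lines above implement a simple poll: GET each system-critical pod, inspect its PodReady condition, and re-fetch the node between probes. A minimal client-go sketch of that readiness loop; waitPodReady and isPodReady are illustrative names, not minikube's:

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the pod's PodReady condition is True,
    // the same test behind the `has status "Ready":"True"` lines.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    // waitPodReady polls the API server until the named pod is Ready or
    // the timeout (6m0s in the log) expires.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("pod %q in %q not Ready within %v", name, ns, timeout)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
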
	I0409 00:49:48.826793    2144 api_server.go:52] waiting for apiserver process to appear ...
	I0409 00:49:48.840884    2144 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 00:49:48.869620    2144 command_runner.go:130] > 2104
	I0409 00:49:48.869769    2144 api_server.go:72] duration metric: took 22.9330614s to wait for apiserver process to appear ...
	I0409 00:49:48.869769    2144 api_server.go:88] waiting for apiserver healthz status ...
	I0409 00:49:48.869829    2144 api_server.go:253] Checking apiserver healthz at https://192.168.113.157:8443/healthz ...
	I0409 00:49:48.881869    2144 api_server.go:279] https://192.168.113.157:8443/healthz returned 200:
	ok
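
The healthz probe above is a plain HTTPS GET that expects HTTP 200 with the literal body "ok". A sketch of the same check, assuming the cluster CA is available as PEM bytes (how caCert is loaded is outside this sketch):

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // checkHealthz performs a GET against e.g.
    // https://192.168.113.157:8443/healthz and insists on "ok".
    func checkHealthz(url string, caCert []byte) error {
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caCert)
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{RootCAs: pool},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    	}
    	return nil
    }
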
	I0409 00:49:48.881869    2144 discovery_client.go:658] "Request Body" body=""
	I0409 00:49:48.881869    2144 round_trippers.go:470] GET https://192.168.113.157:8443/version
	I0409 00:49:48.881869    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:48.881869    2144 round_trippers.go:480]     Accept: application/json, */*
	I0409 00:49:48.881869    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:48.884859    2144 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 00:49:48.884859    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:48.884859    2144 round_trippers.go:587]     Content-Length: 263
	I0409 00:49:48.884859    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:48 GMT
	I0409 00:49:48.884859    2144 round_trippers.go:587]     Audit-Id: 761aa945-0ec1-48e3-8e2c-e72e698e47a6
	I0409 00:49:48.884859    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:48.884859    2144 round_trippers.go:587]     Content-Type: application/json
	I0409 00:49:48.884859    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:48.884859    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:48.884859    2144 discovery_client.go:658] "Response Body" body=<
		{
		  "major": "1",
		  "minor": "32",
		  "gitVersion": "v1.32.2",
		  "gitCommit": "67a30c0adcf52bd3f56ff0893ce19966be12991f",
		  "gitTreeState": "clean",
		  "buildDate": "2025-02-12T21:19:47Z",
		  "goVersion": "go1.23.6",
		  "compiler": "gc",
		  "platform": "linux/amd64"
		}
	 >
	I0409 00:49:48.884859    2144 api_server.go:141] control plane version: v1.32.2
	I0409 00:49:48.884859    2144 api_server.go:131] duration metric: took 15.0898ms to wait for apiserver health ...
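
The GET /version round trip above is what client-go's discovery client performs; the gitVersion field of the JSON becomes the logged "control plane version". A sketch against the clientset from the earlier snippet:

    import "k8s.io/client-go/kubernetes"

    // controlPlaneVersion issues the same GET /version as the log above
    // and returns gitVersion ("v1.32.2" in this run).
    func controlPlaneVersion(cs kubernetes.Interface) (string, error) {
    	info, err := cs.Discovery().ServerVersion()
    	if err != nil {
    		return "", err
    	}
    	return info.GitVersion, nil
    }
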
	I0409 00:49:48.884859    2144 system_pods.go:43] waiting for kube-system pods to appear ...
	I0409 00:49:48.884859    2144 type.go:204] "Request Body" body=""
	I0409 00:49:49.019850    2144 request.go:661] Waited for 134.9901ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods
	I0409 00:49:49.019850    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods
	I0409 00:49:49.019850    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:49.019850    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:49.020322    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:49.024722    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:49.024793    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:49.024793    2144 round_trippers.go:587]     Audit-Id: f113f748-0ddd-42ff-b042-104a01a278e5
	I0409 00:49:49.024793    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:49.024858    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:49.024858    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:49.024858    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:49.024858    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:49 GMT
	I0409 00:49:49.026648    2144 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 da c7 02 0a  09 0a 00 12 03 34 34 31  |ist..........441|
		00000020  1a 00 12 d4 27 0a ae 19  0a 18 63 6f 72 65 64 6e  |....'.....coredn|
		00000030  73 2d 36 36 38 64 36 62  66 39 62 63 2d 64 35 34  |s-668d6bf9bc-d54|
		00000040  73 34 12 13 63 6f 72 65  64 6e 73 2d 36 36 38 64  |s4..coredns-668d|
		00000050  36 62 66 39 62 63 2d 1a  0b 6b 75 62 65 2d 73 79  |6bf9bc-..kube-sy|
		00000060  73 74 65 6d 22 00 2a 24  31 32 34 33 31 66 32 37  |stem".*$12431f27|
		00000070  2d 37 65 34 65 2d 34 31  63 39 2d 38 64 35 34 2d  |-7e4e-41c9-8d54-|
		00000080  62 63 37 62 65 32 30 37  34 62 39 63 32 03 34 33  |bc7be2074b9c2.43|
		00000090  36 38 00 42 08 08 96 88  d7 bf 06 10 00 5a 13 0a  |68.B.........Z..|
		000000a0  07 6b 38 73 2d 61 70 70  12 08 6b 75 62 65 2d 64  |.k8s-app..kube-d|
		000000b0  6e 73 5a 1f 0a 11 70 6f  64 2d 74 65 6d 70 6c 61  |nsZ...pod-templa|
		000000c0  74 65 2d 68 61 73 68 12  0a 36 36 38 64 36 62 66  |te-hash..668d6b [truncated 206261 chars]
	 >
	I0409 00:49:49.027691    2144 system_pods.go:59] 8 kube-system pods found
	I0409 00:49:49.027766    2144 system_pods.go:61] "coredns-668d6bf9bc-d54s4" [12431f27-7e4e-41c9-8d54-bc7be2074b9c] Running
	I0409 00:49:49.027766    2144 system_pods.go:61] "etcd-multinode-611500" [622d9aaa-1f2f-435c-8cea-b53badba27f4] Running
	I0409 00:49:49.027766    2144 system_pods.go:61] "kindnet-vntlr" [2e088361-08c9-4325-8241-20f5f443dcf6] Running
	I0409 00:49:49.027766    2144 system_pods.go:61] "kube-apiserver-multinode-611500" [50196775-bc0c-41c1-b36c-193695d2db23] Running
	I0409 00:49:49.027766    2144 system_pods.go:61] "kube-controller-manager-multinode-611500" [75af0b90-6c72-4624-8660-aa943fec9606] Running
	I0409 00:49:49.027766    2144 system_pods.go:61] "kube-proxy-zxxgf" [3506eee7-d946-4dde-91c9-9fc5c1474434] Running
	I0409 00:49:49.027766    2144 system_pods.go:61] "kube-scheduler-multinode-611500" [9185d5c0-b28a-438c-b05a-64667e4ac3d7] Running
	I0409 00:49:49.027866    2144 system_pods.go:61] "storage-provisioner" [8f7ea37f-c3a7-44fc-ac99-c184b674aca3] Running
	I0409 00:49:49.027866    2144 system_pods.go:74] duration metric: took 143.0059ms to wait for pod list to return data ...
	I0409 00:49:49.027911    2144 default_sa.go:34] waiting for default service account to be created ...
	I0409 00:49:49.028011    2144 type.go:204] "Request Body" body=""
	I0409 00:49:49.224313    2144 request.go:661] Waited for 196.1804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.113.157:8443/api/v1/namespaces/default/serviceaccounts
	I0409 00:49:49.224313    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/default/serviceaccounts
	I0409 00:49:49.224606    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:49.224606    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:49.224606    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:49.242098    2144 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0409 00:49:49.242127    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:49.242127    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:49.242127    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:49.242127    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:49.242127    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:49.242127    2144 round_trippers.go:587]     Content-Length: 128
	I0409 00:49:49.242127    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:49 GMT
	I0409 00:49:49.242127    2144 round_trippers.go:587]     Audit-Id: c13a73cc-3036-4ca6-a77a-7f544f805db7
	I0409 00:49:49.242127    2144 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 18 0a 02  76 31 12 12 53 65 72 76  |k8s.....v1..Serv|
		00000010  69 63 65 41 63 63 6f 75  6e 74 4c 69 73 74 12 5c  |iceAccountList.\|
		00000020  0a 09 0a 00 12 03 34 34  32 1a 00 12 4f 0a 4d 0a  |......442...O.M.|
		00000030  07 64 65 66 61 75 6c 74  12 00 1a 07 64 65 66 61  |.default....defa|
		00000040  75 6c 74 22 00 2a 24 35  65 63 37 63 31 66 66 2d  |ult".*$5ec7c1ff-|
		00000050  31 63 66 31 2d 34 64 30  32 2d 38 61 65 33 2d 35  |1cf1-4d02-8ae3-5|
		00000060  62 66 35 65 30 39 65 66  33 37 37 32 03 33 32 36  |bf5e09ef3772.326|
		00000070  38 00 42 08 08 95 88 d7  bf 06 10 00 1a 00 22 00  |8.B...........".|
	 >
	I0409 00:49:49.242324    2144 default_sa.go:45] found service account: "default"
	I0409 00:49:49.242409    2144 default_sa.go:55] duration metric: took 214.4949ms for default service account to be created ...
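
Waiting on the "default" service account matters because pods in the default namespace cannot be admitted until it exists. A sketch of the poll behind default_sa.go; waitDefaultSA is an illustrative name:

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitDefaultSA lists the default namespace's service accounts until
    // "default" shows up, matching the ~214ms wait logged above.
    func waitDefaultSA(ctx context.Context, cs kubernetes.Interface, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		sas, err := cs.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
    		if err == nil {
    			for _, sa := range sas.Items {
    				if sa.Name == "default" {
    					return nil
    				}
    			}
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("service account %q not created within %v", "default", timeout)
    		}
    		time.Sleep(250 * time.Millisecond)
    	}
    }
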
	I0409 00:49:49.242409    2144 system_pods.go:116] waiting for k8s-apps to be running ...
	I0409 00:49:49.242504    2144 type.go:204] "Request Body" body=""
	I0409 00:49:49.419846    2144 request.go:661] Waited for 177.3397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods
	I0409 00:49:49.419846    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods
	I0409 00:49:49.419846    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:49.420435    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:49.420435    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:49.423389    2144 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 00:49:49.423389    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:49.423389    2144 round_trippers.go:587]     Audit-Id: b34c100f-d95d-4a03-b3af-44eec90633aa
	I0409 00:49:49.423389    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:49.423389    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:49.423389    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:49.423389    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:49.423516    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:49 GMT
	I0409 00:49:49.426217    2144 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 da c7 02 0a  09 0a 00 12 03 34 34 32  |ist..........442|
		00000020  1a 00 12 d4 27 0a ae 19  0a 18 63 6f 72 65 64 6e  |....'.....coredn|
		00000030  73 2d 36 36 38 64 36 62  66 39 62 63 2d 64 35 34  |s-668d6bf9bc-d54|
		00000040  73 34 12 13 63 6f 72 65  64 6e 73 2d 36 36 38 64  |s4..coredns-668d|
		00000050  36 62 66 39 62 63 2d 1a  0b 6b 75 62 65 2d 73 79  |6bf9bc-..kube-sy|
		00000060  73 74 65 6d 22 00 2a 24  31 32 34 33 31 66 32 37  |stem".*$12431f27|
		00000070  2d 37 65 34 65 2d 34 31  63 39 2d 38 64 35 34 2d  |-7e4e-41c9-8d54-|
		00000080  62 63 37 62 65 32 30 37  34 62 39 63 32 03 34 33  |bc7be2074b9c2.43|
		00000090  36 38 00 42 08 08 96 88  d7 bf 06 10 00 5a 13 0a  |68.B.........Z..|
		000000a0  07 6b 38 73 2d 61 70 70  12 08 6b 75 62 65 2d 64  |.k8s-app..kube-d|
		000000b0  6e 73 5a 1f 0a 11 70 6f  64 2d 74 65 6d 70 6c 61  |nsZ...pod-templa|
		000000c0  74 65 2d 68 61 73 68 12  0a 36 36 38 64 36 62 66  |te-hash..668d6b [truncated 206261 chars]
	 >
	I0409 00:49:49.426805    2144 system_pods.go:86] 8 kube-system pods found
	I0409 00:49:49.426805    2144 system_pods.go:89] "coredns-668d6bf9bc-d54s4" [12431f27-7e4e-41c9-8d54-bc7be2074b9c] Running
	I0409 00:49:49.426805    2144 system_pods.go:89] "etcd-multinode-611500" [622d9aaa-1f2f-435c-8cea-b53badba27f4] Running
	I0409 00:49:49.426805    2144 system_pods.go:89] "kindnet-vntlr" [2e088361-08c9-4325-8241-20f5f443dcf6] Running
	I0409 00:49:49.426805    2144 system_pods.go:89] "kube-apiserver-multinode-611500" [50196775-bc0c-41c1-b36c-193695d2db23] Running
	I0409 00:49:49.426805    2144 system_pods.go:89] "kube-controller-manager-multinode-611500" [75af0b90-6c72-4624-8660-aa943fec9606] Running
	I0409 00:49:49.426805    2144 system_pods.go:89] "kube-proxy-zxxgf" [3506eee7-d946-4dde-91c9-9fc5c1474434] Running
	I0409 00:49:49.426805    2144 system_pods.go:89] "kube-scheduler-multinode-611500" [9185d5c0-b28a-438c-b05a-64667e4ac3d7] Running
	I0409 00:49:49.426805    2144 system_pods.go:89] "storage-provisioner" [8f7ea37f-c3a7-44fc-ac99-c184b674aca3] Running
	I0409 00:49:49.426805    2144 system_pods.go:126] duration metric: took 184.3258ms to wait for k8s-apps to be running ...
	I0409 00:49:49.426805    2144 system_svc.go:44] waiting for kubelet service to be running ....
	I0409 00:49:49.437963    2144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0409 00:49:49.460788    2144 system_svc.go:56] duration metric: took 33.7328ms WaitForService to wait for kubelet
	I0409 00:49:49.460788    2144 kubeadm.go:582] duration metric: took 23.5242218s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
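
The kubelet check above leans entirely on systemctl's exit status: `is-active --quiet` prints nothing and exits non-zero when no named unit is active, so the error check is the whole test (the stray "service" token in the logged command is tolerated because is-active succeeds if at least one of the named units is active). Run locally here for illustration; minikube executes it over SSH inside the VM:

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Exit code is the whole answer; no output is produced with --quiet.
    	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
    		log.Fatalf("kubelet service is not running: %v", err)
    	}
    	log.Println("kubelet service is running")
    }
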
	I0409 00:49:49.460788    2144 node_conditions.go:102] verifying NodePressure condition ...
	I0409 00:49:49.460936    2144 type.go:204] "Request Body" body=""
	I0409 00:49:49.619463    2144 request.go:661] Waited for 158.4752ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.113.157:8443/api/v1/nodes
	I0409 00:49:49.619463    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes
	I0409 00:49:49.619463    2144 round_trippers.go:476] Request Headers:
	I0409 00:49:49.619463    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:49:49.619463    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:49:49.623887    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:49:49.623966    2144 round_trippers.go:584] Response Headers:
	I0409 00:49:49.624015    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:49:49.624015    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:49:49 GMT
	I0409 00:49:49.624015    2144 round_trippers.go:587]     Audit-Id: cf2c4c82-de01-4232-9740-fe3f79242361
	I0409 00:49:49.624015    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:49:49.624045    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:49:49.624045    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:49:49.624414    2144 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 d2 21 0a  09 0a 00 12 03 34 34 32  |List..!......442|
		00000020  1a 00 12 c4 21 0a fc 10  0a 10 6d 75 6c 74 69 6e  |....!.....multin|
		00000030  6f 64 65 2d 36 31 31 35  30 30 12 00 1a 00 22 00  |ode-611500....".|
		00000040  2a 24 62 31 32 35 32 66  34 61 2d 32 32 33 30 2d  |*$b1252f4a-2230-|
		00000050  34 36 61 36 2d 39 33 38  62 2d 37 63 30 37 31 31  |46a6-938b-7c0711|
		00000060  31 33 33 34 32 34 32 03  34 31 37 38 00 42 08 08  |1334242.4178.B..|
		00000070  8d 88 d7 bf 06 10 00 5a  20 0a 17 62 65 74 61 2e  |.......Z ..beta.|
		00000080  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		00000090  63 68 12 05 61 6d 64 36  34 5a 1e 0a 15 62 65 74  |ch..amd64Z...bet|
		000000a0  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		000000b0  6f 73 12 05 6c 69 6e 75  78 5a 1b 0a 12 6b 75 62  |os..linuxZ...kub|
		000000c0  65 72 6e 65 74 65 73 2e  69 6f 2f 61 72 63 68 12  |ernetes.io/arch [truncated 20382 chars]
	 >
	I0409 00:49:49.624605    2144 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0409 00:49:49.624742    2144 node_conditions.go:123] node cpu capacity is 2
	I0409 00:49:49.624804    2144 node_conditions.go:105] duration metric: took 164.0136ms to run NodePressure ...
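
The NodePressure step lists the nodes once and records two capacity figures, ephemeral storage and CPU, from each node's status. A client-go sketch producing the same two numbers as the log lines above:

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity echoes the two capacity figures the
    // node_conditions.go check logs above.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    	}
    	return nil
    }
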
	I0409 00:49:49.624855    2144 start.go:241] waiting for startup goroutines ...
	I0409 00:49:49.624855    2144 start.go:246] waiting for cluster config update ...
	I0409 00:49:49.624855    2144 start.go:255] writing updated cluster config ...
	I0409 00:49:49.629941    2144 out.go:201] 
	I0409 00:49:49.633901    2144 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 00:49:49.642858    2144 config.go:182] Loaded profile config "multinode-611500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 00:49:49.643832    2144 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\config.json ...
	I0409 00:49:49.649996    2144 out.go:177] * Starting "multinode-611500-m02" worker node in "multinode-611500" cluster
	I0409 00:49:49.654288    2144 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0409 00:49:49.654288    2144 cache.go:56] Caching tarball of preloaded images
	I0409 00:49:49.654288    2144 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0409 00:49:49.655040    2144 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0409 00:49:49.655152    2144 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\config.json ...
	I0409 00:49:49.659830    2144 start.go:360] acquireMachinesLock for multinode-611500-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0409 00:49:49.659830    2144 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-611500-m02"
	I0409 00:49:49.659830    2144 start.go:93] Provisioning new machine with config: &{Name:multinode-611500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-611500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.157 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0409 00:49:49.659830    2144 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0409 00:49:49.663068    2144 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0409 00:49:49.663068    2144 start.go:159] libmachine.API.Create for "multinode-611500" (driver="hyperv")
	I0409 00:49:49.663068    2144 client.go:168] LocalClient.Create starting
	I0409 00:49:49.664084    2144 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0409 00:49:49.664084    2144 main.go:141] libmachine: Decoding PEM data...
	I0409 00:49:49.664084    2144 main.go:141] libmachine: Parsing certificate...
	I0409 00:49:49.664712    2144 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0409 00:49:49.664932    2144 main.go:141] libmachine: Decoding PEM data...
	I0409 00:49:49.664932    2144 main.go:141] libmachine: Parsing certificate...
	I0409 00:49:49.665159    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0409 00:49:51.555807    2144 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0409 00:49:51.555901    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:49:51.555901    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0409 00:49:53.277795    2144 main.go:141] libmachine: [stdout =====>] : False
	
	I0409 00:49:53.278296    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:49:53.278367    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0409 00:49:54.774603    2144 main.go:141] libmachine: [stdout =====>] : True
	
	I0409 00:49:54.774603    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:49:54.774727    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0409 00:49:58.384857    2144 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0409 00:49:58.384857    2144 main.go:141] libmachine: [stderr =====>] : 
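
Every Hyper-V interaction in this log follows one pattern: shell out to powershell.exe with -NoProfile -NonInteractive, capture stdout and stderr separately, and parse. The switch discovery additionally pipes through ConvertTo-Json, which makes the Go side a straight json.Unmarshal. A simplified sketch of that step; the real filter above also matches external switches and a specific switch GUID:

    import (
    	"encoding/json"
    	"os/exec"
    )

    // vmSwitch mirrors the fields selected by the Get-VMSwitch call above.
    type vmSwitch struct {
    	Id         string `json:"Id"`
    	Name       string `json:"Name"`
    	SwitchType int    `json:"SwitchType"`
    }

    // listSwitches shells out the way libmachine does and decodes the
    // ConvertTo-Json output into Go structs.
    func listSwitches() ([]vmSwitch, error) {
    	script := `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
    	if err != nil {
    		return nil, err
    	}
    	var switches []vmSwitch
    	err = json.Unmarshal(out, &switches)
    	return switches, err
    }
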
	I0409 00:49:58.387124    2144 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0409 00:49:58.889828    2144 main.go:141] libmachine: Creating SSH key...
	I0409 00:49:59.748296    2144 main.go:141] libmachine: Creating VM...
	I0409 00:49:59.749222    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0409 00:50:02.632255    2144 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0409 00:50:02.632255    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:50:02.632349    2144 main.go:141] libmachine: Using switch "Default Switch"
	I0409 00:50:02.632453    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0409 00:50:04.371288    2144 main.go:141] libmachine: [stdout =====>] : True
	
	I0409 00:50:04.372289    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:50:04.372462    2144 main.go:141] libmachine: Creating VHD
	I0409 00:50:04.372525    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0409 00:50:08.136509    2144 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 55806535-FED5-4C4B-9557-8EA26549A509
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0409 00:50:08.136509    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:50:08.137875    2144 main.go:141] libmachine: Writing magic tar header
	I0409 00:50:08.137875    2144 main.go:141] libmachine: Writing SSH key tar header
	I0409 00:50:08.152935    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0409 00:50:11.356538    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:50:11.357315    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:50:11.357409    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500-m02\disk.vhd' -SizeBytes 20000MB
	I0409 00:50:13.936120    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:50:13.936238    2144 main.go:141] libmachine: [stderr =====>] : 
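
The sequence above is the docker-machine VHD trick: create a tiny 10MB fixed VHD, overwrite its start with a tar stream carrying the freshly generated SSH key (the "magic tar header" / "SSH key tar header" steps), convert it to a dynamic VHD, then resize to the full 20000MB so the guest's init can find and extract the key on first boot. A loose sketch of the tar-writing step only; the file name and mode inside the archive are assumptions here, not lifted from minikube:

    import (
    	"archive/tar"
    	"os"
    )

    // writeKeyTar overwrites the start of the fixed-size VHD with a tar
    // stream containing the generated public key. Layout is assumed to
    // follow the docker-machine convention.
    func writeKeyTar(vhdPath string, pubKey []byte) error {
    	f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0o644)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	tw := tar.NewWriter(f)
    	if err := tw.WriteHeader(&tar.Header{
    		Name: ".ssh/authorized_keys",
    		Mode: 0o644,
    		Size: int64(len(pubKey)),
    	}); err != nil {
    		return err
    	}
    	if _, err := tw.Write(pubKey); err != nil {
    		return err
    	}
    	return tw.Close() // flushes the trailing zero blocks of the archive
    }
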
	I0409 00:50:13.936238    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-611500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0409 00:50:17.655734    2144 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-611500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0409 00:50:17.656452    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:50:17.656531    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-611500-m02 -DynamicMemoryEnabled $false
	I0409 00:50:19.984114    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:50:19.984114    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:50:19.984315    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-611500-m02 -Count 2
	I0409 00:50:22.222574    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:50:22.222574    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:50:22.222785    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-611500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500-m02\boot2docker.iso'
	I0409 00:50:24.829905    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:50:24.829984    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:50:24.829984    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-611500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500-m02\disk.vhd'
	I0409 00:50:27.512184    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:50:27.512356    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:50:27.512356    2144 main.go:141] libmachine: Starting VM...
	I0409 00:50:27.512456    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-611500-m02
	I0409 00:50:30.691695    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:50:30.691695    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:50:30.692426    2144 main.go:141] libmachine: Waiting for host to start...
	I0409 00:50:30.692426    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:50:33.028322    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:50:33.029234    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:50:33.029342    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:50:35.577096    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:50:35.577560    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:50:36.578076    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:50:38.888915    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:50:38.889960    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:50:38.889960    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:50:41.421197    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:50:41.421197    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:50:42.422403    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:50:44.632869    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:50:44.632869    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:50:44.632869    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:50:47.222078    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:50:47.222078    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:50:48.222128    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:50:50.483937    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:50:50.484333    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:50:50.484333    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:50:53.020407    2144 main.go:141] libmachine: [stdout =====>] : 
	I0409 00:50:53.021357    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:50:54.021509    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:50:56.296464    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:50:56.296913    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:50:56.296913    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:50:58.857897    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.143
	
	I0409 00:50:58.858121    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:50:58.858218    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:51:00.997932    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:51:00.997932    2144 main.go:141] libmachine: [stderr =====>] : 
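
"Waiting for host to start..." above is a fixed-interval poll: query the VM state, then ask the first network adapter for an address, sleeping between empty answers until DHCP completes (about 28 seconds in this run before 192.168.113.143 appears). A sketch of the IP half of that loop, reusing the powershell.exe exec pattern shown earlier:

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForIP polls Hyper-V once per second until the guest's first
    // network adapter reports an address, mirroring the
    // ipaddresses[0] queries in the log.
    func waitForIP(vmName string, timeout time.Duration) (string, error) {
    	query := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
    		if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
    			return ip, nil
    		}
    		time.Sleep(time.Second)
    	}
    	return "", fmt.Errorf("no IP reported for %s within %v", vmName, timeout)
    }
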
	I0409 00:51:00.998952    2144 machine.go:93] provisionDockerMachine start ...
	I0409 00:51:00.999002    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:51:03.176743    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:51:03.176743    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:03.176743    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:51:05.751335    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.143
	
	I0409 00:51:05.751444    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:05.757927    2144 main.go:141] libmachine: Using SSH client type: native
	I0409 00:51:05.772645    2144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.143 22 <nil> <nil>}
	I0409 00:51:05.772645    2144 main.go:141] libmachine: About to run SSH command:
	hostname
	I0409 00:51:05.912800    2144 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0409 00:51:05.912878    2144 buildroot.go:166] provisioning hostname "multinode-611500-m02"
	I0409 00:51:05.912935    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:51:08.031764    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:51:08.032052    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:08.032052    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:51:10.533754    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.143
	
	I0409 00:51:10.534842    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:10.539992    2144 main.go:141] libmachine: Using SSH client type: native
	I0409 00:51:10.540299    2144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.143 22 <nil> <nil>}
	I0409 00:51:10.540299    2144 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-611500-m02 && echo "multinode-611500-m02" | sudo tee /etc/hostname
	I0409 00:51:10.707067    2144 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-611500-m02
	
	I0409 00:51:10.707132    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:51:12.878222    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:51:12.878364    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:12.878364    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:51:15.451803    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.143
	
	I0409 00:51:15.451803    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:15.457541    2144 main.go:141] libmachine: Using SSH client type: native
	I0409 00:51:15.458242    2144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.143 22 <nil> <nil>}
	I0409 00:51:15.458242    2144 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-611500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-611500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-611500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0409 00:51:15.616319    2144 main.go:141] libmachine: SSH cmd err, output: <nil>: 
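
provisionDockerMachine drives the guest entirely over SSH: resolve the IP, dial with the generated key, and run one command per step (hostname, the tee to /etc/hostname, the /etc/hosts patch just above). A sketch of that transport using golang.org/x/crypto/ssh; runSSH and keyPath are illustrative, and skipping host-key verification is acceptable only because this is a throwaway local VM:

    import (
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // runSSH dials addr (e.g. "192.168.113.143:22") with the machine's
    // private key and runs a single command, the same shape as the
    // "About to run SSH command" steps above.
    func runSSH(addr, user, keyPath, cmd string) (string, error) {
    	keyBytes, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway local VM only
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }
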
	I0409 00:51:15.616319    2144 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0409 00:51:15.616319    2144 buildroot.go:174] setting up certificates
	I0409 00:51:15.616319    2144 provision.go:84] configureAuth start
	I0409 00:51:15.616319    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:51:17.824672    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:51:17.824672    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:17.824672    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:51:20.419009    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.143
	
	I0409 00:51:20.419634    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:20.419634    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:51:22.677726    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:51:22.677726    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:22.677826    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:51:25.328352    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.143
	
	I0409 00:51:25.329299    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:25.329299    2144 provision.go:143] copyHostCerts
	I0409 00:51:25.329299    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0409 00:51:25.329299    2144 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0409 00:51:25.329299    2144 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0409 00:51:25.330081    2144 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0409 00:51:25.331365    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0409 00:51:25.331627    2144 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0409 00:51:25.331718    2144 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0409 00:51:25.331970    2144 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0409 00:51:25.332763    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0409 00:51:25.332763    2144 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0409 00:51:25.332763    2144 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0409 00:51:25.333413    2144 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0409 00:51:25.334421    2144 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-611500-m02 san=[127.0.0.1 192.168.113.143 localhost minikube multinode-611500-m02]
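The provision.go:117 line above records the SAN list baked into the machine's server certificate. A self-contained sketch of issuing such a cert with Go's crypto/x509, standing in for minikube's cert helper; the CA here is generated on the fly rather than loaded from ca.pem/ca-key.pem:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (minikube would load ca.pem/ca-key.pem instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-611500-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.113.143")},
		DNSNames:     []string{"localhost", "minikube", "multinode-611500-m02"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```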
	I0409 00:51:25.500456    2144 provision.go:177] copyRemoteCerts
	I0409 00:51:25.508145    2144 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0409 00:51:25.508145    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:51:27.733178    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:51:27.733178    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:27.733541    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:51:30.362629    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.143
	
	I0409 00:51:30.362629    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:30.364113    2144 sshutil.go:53] new ssh client: &{IP:192.168.113.143 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500-m02\id_rsa Username:docker}
	I0409 00:51:30.477317    2144 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9689886s)
	I0409 00:51:30.477317    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0409 00:51:30.477317    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0409 00:51:30.536791    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0409 00:51:30.536989    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0409 00:51:30.593444    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0409 00:51:30.593444    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0409 00:51:30.649349    2144 provision.go:87] duration metric: took 15.0328317s to configureAuth
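The `ssh_runner.go:362] scp ... -->` lines above are not OpenSSH scp; the runner streams file contents over an SSH session into a privileged write on the guest. A sketch of the same idea with golang.org/x/crypto/ssh (key path and target taken from the log; the `pushFile` helper and the `sudo tee` write are illustrative, and error handling is trimmed):

```go
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// pushFile streams local bytes to a root-owned remote path by
// piping them into "sudo tee", mirroring the scp lines above.
func pushFile(client *ssh.Client, local, remote string) error {
	data, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %s > /dev/null", remote))
}

func main() {
	key, _ := os.ReadFile(`C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500-m02\id_rsa`)
	signer, _ := ssh.ParsePrivateKey(key)
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, _ := ssh.Dial("tcp", "192.168.113.143:22", cfg)
	_ = pushFile(client, "ca.pem", "/etc/docker/ca.pem")
}
```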
	I0409 00:51:30.649456    2144 buildroot.go:189] setting minikube options for container-runtime
	I0409 00:51:30.649635    2144 config.go:182] Loaded profile config "multinode-611500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 00:51:30.649635    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:51:32.798518    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:51:32.798518    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:32.798600    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:51:35.438711    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.143
	
	I0409 00:51:35.438711    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:35.443189    2144 main.go:141] libmachine: Using SSH client type: native
	I0409 00:51:35.443920    2144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.143 22 <nil> <nil>}
	I0409 00:51:35.443920    2144 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0409 00:51:35.590219    2144 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0409 00:51:35.590219    2144 buildroot.go:70] root file system type: tmpfs
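buildroot.go:70 records that / is tmpfs: the minikube ISO boots into RAM, so nothing written under /lib/systemd/system survives a reboot, which is why the diff at 00:51:47 below finds no pre-existing docker.service. The probe itself is just `df` plus last-line extraction; a local equivalent, sketched:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe the provisioner runs over SSH:
	// "df --output=fstype /" prints a header row plus one data row.
	out, err := exec.Command("df", "--output=fstype", "/").Output()
	if err != nil {
		panic(err)
	}
	fields := strings.Fields(strings.TrimSpace(string(out)))
	fmt.Println("root fstype:", fields[len(fields)-1]) // e.g. "tmpfs"
}
```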
	I0409 00:51:35.590741    2144 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0409 00:51:35.590811    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:51:37.762720    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:51:37.763111    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:37.763221    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:51:40.395283    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.143
	
	I0409 00:51:40.395283    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:40.401155    2144 main.go:141] libmachine: Using SSH client type: native
	I0409 00:51:40.401914    2144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.143 22 <nil> <nil>}
	I0409 00:51:40.402218    2144 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.113.157"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0409 00:51:40.564173    2144 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.113.157
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0409 00:51:40.564264    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:51:42.757671    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:51:42.757671    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:42.758486    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:51:45.337683    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.143
	
	I0409 00:51:45.337683    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:45.344778    2144 main.go:141] libmachine: Using SSH client type: native
	I0409 00:51:45.345026    2144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.143 22 <nil> <nil>}
	I0409 00:51:45.345026    2144 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0409 00:51:47.586398    2144 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0409 00:51:47.586431    2144 machine.go:96] duration metric: took 46.5868641s to provisionDockerMachine
	I0409 00:51:47.586484    2144 client.go:171] duration metric: took 1m57.9218539s to LocalClient.Create
	I0409 00:51:47.586517    2144 start.go:167] duration metric: took 1m57.9218869s to libmachine.API.Create "multinode-611500"
	I0409 00:51:47.586562    2144 start.go:293] postStartSetup for "multinode-611500-m02" (driver="hyperv")
	I0409 00:51:47.586595    2144 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0409 00:51:47.598982    2144 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0409 00:51:47.598982    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:51:49.752960    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:51:49.753795    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:49.753863    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:51:52.334604    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.143
	
	I0409 00:51:52.334604    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:52.335921    2144 sshutil.go:53] new ssh client: &{IP:192.168.113.143 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500-m02\id_rsa Username:docker}
	I0409 00:51:52.439548    2144 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8405019s)
	I0409 00:51:52.451482    2144 ssh_runner.go:195] Run: cat /etc/os-release
	I0409 00:51:52.461030    2144 command_runner.go:130] > NAME=Buildroot
	I0409 00:51:52.461030    2144 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0409 00:51:52.461148    2144 command_runner.go:130] > ID=buildroot
	I0409 00:51:52.461148    2144 command_runner.go:130] > VERSION_ID=2023.02.9
	I0409 00:51:52.461148    2144 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0409 00:51:52.461148    2144 info.go:137] Remote host: Buildroot 2023.02.9
	I0409 00:51:52.461148    2144 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0409 00:51:52.461948    2144 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0409 00:51:52.463625    2144 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
	I0409 00:51:52.463673    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /etc/ssl/certs/98642.pem
	I0409 00:51:52.475429    2144 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0409 00:51:52.493015    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
	I0409 00:51:52.537662    2144 start.go:296] duration metric: took 4.951002s for postStartSetup
	I0409 00:51:52.540815    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:51:54.675787    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:51:54.675787    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:54.675787    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:51:57.263518    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.143
	
	I0409 00:51:57.263518    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:57.264573    2144 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\config.json ...
	I0409 00:51:57.266933    2144 start.go:128] duration metric: took 2m7.6054129s to createHost
	I0409 00:51:57.266933    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:51:59.438687    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:51:59.438687    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:51:59.438687    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:52:01.990544    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.143
	
	I0409 00:52:01.990544    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:52:01.998866    2144 main.go:141] libmachine: Using SSH client type: native
	I0409 00:52:01.999618    2144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.143 22 <nil> <nil>}
	I0409 00:52:01.999618    2144 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0409 00:52:02.135734    2144 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744159922.157939990
	
	I0409 00:52:02.135838    2144 fix.go:216] guest clock: 1744159922.157939990
	I0409 00:52:02.135926    2144 fix.go:229] Guest: 2025-04-09 00:52:02.15793999 +0000 UTC Remote: 2025-04-09 00:51:57.2669331 +0000 UTC m=+339.573045901 (delta=4.89100689s)
	I0409 00:52:02.135926    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:52:04.240544    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:52:04.241742    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:52:04.241742    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:52:06.795240    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.143
	
	I0409 00:52:06.795240    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:52:06.801069    2144 main.go:141] libmachine: Using SSH client type: native
	I0409 00:52:06.801911    2144 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.113.143 22 <nil> <nil>}
	I0409 00:52:06.801911    2144 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744159922
	I0409 00:52:06.948638    2144 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Apr  9 00:52:02 UTC 2025
	
	I0409 00:52:06.948638    2144 fix.go:236] clock set: Wed Apr  9 00:52:02 UTC 2025
	 (err=<nil>)
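fix.go compares the guest clock (the `date +%s.%N` output) against the host-side timestamp and, since the roughly 4.9 s delta is too large, resets the guest with the `sudo date -s @...` command shown above. The delta arithmetic, reproduced from the logged values (the tolerance constant below is illustrative, not minikube's exact threshold):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values from the fix.go lines above.
	guest := time.Date(2025, 4, 9, 0, 52, 2, 157939990, time.UTC)
	host := time.Date(2025, 4, 9, 0, 51, 57, 266933100, time.UTC)

	delta := guest.Sub(host)
	fmt.Println("delta:", delta) // 4.89100689s, matching the log

	// Past some tolerance, the provisioner resets the guest clock
	// with the "sudo date -s @<unix-seconds>" command seen above.
	const tolerance = 2 * time.Second // illustrative threshold
	if delta > tolerance || delta < -tolerance {
		fmt.Println("clock drift exceeds tolerance; resetting")
	}
}
```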
	I0409 00:52:06.948638    2144 start.go:83] releasing machines lock for "multinode-611500-m02", held for 2m17.2869906s
	I0409 00:52:06.949215    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:52:09.080921    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:52:09.081010    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:52:09.081010    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:52:11.596568    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.143
	
	I0409 00:52:11.596568    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:52:11.601014    2144 out.go:177] * Found network options:
	I0409 00:52:11.603744    2144 out.go:177]   - NO_PROXY=192.168.113.157
	W0409 00:52:11.606836    2144 proxy.go:119] fail to check proxy env: Error ip not in block
	I0409 00:52:11.609106    2144 out.go:177]   - NO_PROXY=192.168.113.157
	W0409 00:52:11.611497    2144 proxy.go:119] fail to check proxy env: Error ip not in block
	W0409 00:52:11.613543    2144 proxy.go:119] fail to check proxy env: Error ip not in block
	I0409 00:52:11.617111    2144 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0409 00:52:11.617170    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:52:11.626520    2144 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0409 00:52:11.626520    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 00:52:13.837176    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:52:13.837176    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:52:13.837176    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:52:13.861074    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:52:13.861370    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:52:13.861619    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 00:52:16.544508    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.143
	
	I0409 00:52:16.545499    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:52:16.545536    2144 sshutil.go:53] new ssh client: &{IP:192.168.113.143 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500-m02\id_rsa Username:docker}
	I0409 00:52:16.569541    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.143
	
	I0409 00:52:16.569541    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:52:16.570673    2144 sshutil.go:53] new ssh client: &{IP:192.168.113.143 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500-m02\id_rsa Username:docker}
	I0409 00:52:16.640752    2144 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0409 00:52:16.642051    2144 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0154648s)
	W0409 00:52:16.642051    2144 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0409 00:52:16.654144    2144 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0409 00:52:16.659015    2144 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0409 00:52:16.659015    2144 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0418378s)
	W0409 00:52:16.659015    2144 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0409 00:52:16.685309    2144 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0409 00:52:16.685428    2144 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0409 00:52:16.685428    2144 start.go:495] detecting cgroup driver to use...
	I0409 00:52:16.685493    2144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0409 00:52:16.723642    2144 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0409 00:52:16.735504    2144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0409 00:52:16.766459    2144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0409 00:52:16.769873    2144 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0409 00:52:16.770205    2144 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
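The two warnings above are a side effect of the probe started at 00:52:11: the runner invoked `curl.exe` (the Windows binary name) inside the Linux guest, where only `curl` exists, so the check exited 127 ("command not found", logged at 00:52:16.659) and the failure surfaced as a registry-connectivity problem. A sketch of an equivalent reachability probe using net/http directly instead of shelling out:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Equivalent of "curl -sS -m 2 https://registry.k8s.io/":
	// any HTTP response at all proves the registry is reachable.
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("https://registry.k8s.io/")
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry reachable, status:", resp.Status)
}
```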
	I0409 00:52:16.788750    2144 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0409 00:52:16.801795    2144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0409 00:52:16.836252    2144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0409 00:52:16.868360    2144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0409 00:52:16.899578    2144 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0409 00:52:16.927921    2144 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0409 00:52:16.958170    2144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0409 00:52:16.986823    2144 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0409 00:52:17.015803    2144 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0409 00:52:17.054279    2144 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0409 00:52:17.073386    2144 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0409 00:52:17.073986    2144 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0409 00:52:17.085465    2144 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0409 00:52:17.120335    2144 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
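The sysctl failure at 00:52:17 is expected on first boot: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, so the provisioner falls back to modprobe and then enables IPv4 forwarding, both prerequisites for pod traffic with a bridge CNI. The recovery sequence, sketched (must run as root inside the guest):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// 1. Probe: exits non-zero until br_netfilter is loaded, because
	//    /proc/sys/net/bridge/bridge-nf-call-iptables does not exist yet.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// 2. Load the module so bridged traffic traverses iptables.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe br_netfilter:", err)
			return
		}
	}
	// 3. Enable IPv4 forwarding (the "echo 1 > ..." step in the log).
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, "ip_forward:", err)
	}
}
```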
	I0409 00:52:17.148317    2144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:52:17.343438    2144 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0409 00:52:17.375885    2144 start.go:495] detecting cgroup driver to use...
	I0409 00:52:17.387879    2144 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0409 00:52:17.411470    2144 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0409 00:52:17.411470    2144 command_runner.go:130] > [Unit]
	I0409 00:52:17.411470    2144 command_runner.go:130] > Description=Docker Application Container Engine
	I0409 00:52:17.411470    2144 command_runner.go:130] > Documentation=https://docs.docker.com
	I0409 00:52:17.411934    2144 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0409 00:52:17.411934    2144 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0409 00:52:17.411934    2144 command_runner.go:130] > StartLimitBurst=3
	I0409 00:52:17.411934    2144 command_runner.go:130] > StartLimitIntervalSec=60
	I0409 00:52:17.411934    2144 command_runner.go:130] > [Service]
	I0409 00:52:17.412020    2144 command_runner.go:130] > Type=notify
	I0409 00:52:17.412020    2144 command_runner.go:130] > Restart=on-failure
	I0409 00:52:17.412020    2144 command_runner.go:130] > Environment=NO_PROXY=192.168.113.157
	I0409 00:52:17.412020    2144 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0409 00:52:17.412020    2144 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0409 00:52:17.412020    2144 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0409 00:52:17.412020    2144 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0409 00:52:17.412020    2144 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0409 00:52:17.412020    2144 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0409 00:52:17.412020    2144 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0409 00:52:17.412020    2144 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0409 00:52:17.412020    2144 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0409 00:52:17.412020    2144 command_runner.go:130] > ExecStart=
	I0409 00:52:17.412020    2144 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0409 00:52:17.412020    2144 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0409 00:52:17.412020    2144 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0409 00:52:17.412020    2144 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0409 00:52:17.412020    2144 command_runner.go:130] > LimitNOFILE=infinity
	I0409 00:52:17.412020    2144 command_runner.go:130] > LimitNPROC=infinity
	I0409 00:52:17.412020    2144 command_runner.go:130] > LimitCORE=infinity
	I0409 00:52:17.412020    2144 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0409 00:52:17.412020    2144 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0409 00:52:17.412020    2144 command_runner.go:130] > TasksMax=infinity
	I0409 00:52:17.412020    2144 command_runner.go:130] > TimeoutStartSec=0
	I0409 00:52:17.412020    2144 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0409 00:52:17.412020    2144 command_runner.go:130] > Delegate=yes
	I0409 00:52:17.412020    2144 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0409 00:52:17.412020    2144 command_runner.go:130] > KillMode=process
	I0409 00:52:17.412020    2144 command_runner.go:130] > [Install]
	I0409 00:52:17.412020    2144 command_runner.go:130] > WantedBy=multi-user.target
	I0409 00:52:17.425198    2144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0409 00:52:17.455716    2144 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0409 00:52:17.503971    2144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0409 00:52:17.537962    2144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0409 00:52:17.572049    2144 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0409 00:52:17.635798    2144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0409 00:52:17.657964    2144 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0409 00:52:17.690772    2144 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0409 00:52:17.702192    2144 ssh_runner.go:195] Run: which cri-dockerd
	I0409 00:52:17.707526    2144 command_runner.go:130] > /usr/bin/cri-dockerd
	I0409 00:52:17.720015    2144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0409 00:52:17.737722    2144 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0409 00:52:17.777904    2144 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0409 00:52:17.959149    2144 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0409 00:52:18.141221    2144 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0409 00:52:18.141326    2144 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0409 00:52:18.181297    2144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:52:18.385408    2144 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0409 00:52:20.973261    2144 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5878192s)
	I0409 00:52:20.984776    2144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0409 00:52:21.017885    2144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0409 00:52:21.051763    2144 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0409 00:52:21.248736    2144 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0409 00:52:21.438555    2144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:52:21.631558    2144 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0409 00:52:21.676585    2144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0409 00:52:21.711864    2144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:52:21.907781    2144 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0409 00:52:22.009854    2144 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0409 00:52:22.023546    2144 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0409 00:52:22.031052    2144 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0409 00:52:22.031052    2144 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0409 00:52:22.031052    2144 command_runner.go:130] > Device: 0,22	Inode: 879         Links: 1
	I0409 00:52:22.031052    2144 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0409 00:52:22.031052    2144 command_runner.go:130] > Access: 2025-04-09 00:52:21.954531398 +0000
	I0409 00:52:22.031052    2144 command_runner.go:130] > Modify: 2025-04-09 00:52:21.954531398 +0000
	I0409 00:52:22.031052    2144 command_runner.go:130] > Change: 2025-04-09 00:52:21.958531474 +0000
	I0409 00:52:22.031052    2144 command_runner.go:130] >  Birth: -
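start.go:542 announces a 60-second wait for /var/run/cri-dockerd.sock, and the stat above happens to succeed on the first try. The wait is essentially stat-with-deadline; a sketch (the `waitForSocket` helper is illustrative):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes,
// mirroring the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
```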
	I0409 00:52:22.031415    2144 start.go:563] Will wait 60s for crictl version
	I0409 00:52:22.043552    2144 ssh_runner.go:195] Run: which crictl
	I0409 00:52:22.048666    2144 command_runner.go:130] > /usr/bin/crictl
	I0409 00:52:22.060175    2144 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0409 00:52:22.114101    2144 command_runner.go:130] > Version:  0.1.0
	I0409 00:52:22.114229    2144 command_runner.go:130] > RuntimeName:  docker
	I0409 00:52:22.114229    2144 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0409 00:52:22.114229    2144 command_runner.go:130] > RuntimeApiVersion:  v1
	I0409 00:52:22.114229    2144 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0409 00:52:22.123608    2144 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0409 00:52:22.154690    2144 command_runner.go:130] > 27.4.0
	I0409 00:52:22.167918    2144 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0409 00:52:22.199896    2144 command_runner.go:130] > 27.4.0
	I0409 00:52:22.207106    2144 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0409 00:52:22.210484    2144 out.go:177]   - env NO_PROXY=192.168.113.157
	I0409 00:52:22.213237    2144 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0409 00:52:22.216355    2144 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0409 00:52:22.216355    2144 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0409 00:52:22.216355    2144 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0409 00:52:22.216355    2144 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f4:da:75 Flags:up|broadcast|multicast|running}
	I0409 00:52:22.219458    2144 ip.go:214] interface addr: fe80::e8ab:9cc6:22b1:a5fc/64
	I0409 00:52:22.219458    2144 ip.go:214] interface addr: 192.168.112.1/20
	I0409 00:52:22.238424    2144 ssh_runner.go:195] Run: grep 192.168.112.1	host.minikube.internal$ /etc/hosts
	I0409 00:52:22.245492    2144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
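The /etc/hosts rewrite at 00:52:22 is a replace-then-append: `grep -v` strips any stale host.minikube.internal line, the fresh mapping is echoed on the end, and the temp file is copied back over /etc/hosts. The same transform as a pure function (the name `upsertHost` is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any existing line for name and appends a fresh
// "ip\tname" mapping -- what the grep -v / echo / cp pipeline does.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	before := "127.0.0.1\tlocalhost\n192.168.1.1\thost.minikube.internal\n"
	fmt.Print(upsertHost(before, "192.168.112.1", "host.minikube.internal"))
}
```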
	I0409 00:52:22.269530    2144 mustload.go:65] Loading cluster: multinode-611500
	I0409 00:52:22.270234    2144 config.go:182] Loaded profile config "multinode-611500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 00:52:22.271071    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:52:24.398649    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:52:24.398649    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:52:24.399132    2144 host.go:66] Checking if "multinode-611500" exists ...
	I0409 00:52:24.399817    2144 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500 for IP: 192.168.113.143
	I0409 00:52:24.399904    2144 certs.go:194] generating shared ca certs ...
	I0409 00:52:24.399984    2144 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 00:52:24.400019    2144 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0409 00:52:24.400752    2144 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0409 00:52:24.400752    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0409 00:52:24.401363    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0409 00:52:24.401488    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0409 00:52:24.401595    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0409 00:52:24.402187    2144 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem (1338 bytes)
	W0409 00:52:24.402732    2144 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864_empty.pem, impossibly tiny 0 bytes
	I0409 00:52:24.402875    2144 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0409 00:52:24.403228    2144 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0409 00:52:24.403228    2144 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0409 00:52:24.403929    2144 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0409 00:52:24.403929    2144 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem (1708 bytes)
	I0409 00:52:24.404675    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /usr/share/ca-certificates/98642.pem
	I0409 00:52:24.404932    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:52:24.404932    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem -> /usr/share/ca-certificates/9864.pem
	I0409 00:52:24.404932    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0409 00:52:24.454705    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0409 00:52:24.497958    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0409 00:52:24.545939    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0409 00:52:24.595906    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /usr/share/ca-certificates/98642.pem (1708 bytes)
	I0409 00:52:24.644186    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0409 00:52:24.689552    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem --> /usr/share/ca-certificates/9864.pem (1338 bytes)
	I0409 00:52:24.748384    2144 ssh_runner.go:195] Run: openssl version
	I0409 00:52:24.759983    2144 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0409 00:52:24.769735    2144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98642.pem && ln -fs /usr/share/ca-certificates/98642.pem /etc/ssl/certs/98642.pem"
	I0409 00:52:24.802610    2144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98642.pem
	I0409 00:52:24.810104    2144 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr  8 23:04 /usr/share/ca-certificates/98642.pem
	I0409 00:52:24.811066    2144 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 23:04 /usr/share/ca-certificates/98642.pem
	I0409 00:52:24.821096    2144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98642.pem
	I0409 00:52:24.830112    2144 command_runner.go:130] > 3ec20f2e
	I0409 00:52:24.842055    2144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98642.pem /etc/ssl/certs/3ec20f2e.0"
	I0409 00:52:24.870767    2144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0409 00:52:24.900122    2144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:52:24.906581    2144 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr  8 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:52:24.906810    2144 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:52:24.918357    2144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0409 00:52:24.926118    2144 command_runner.go:130] > b5213941
	I0409 00:52:24.936531    2144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0409 00:52:24.965345    2144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9864.pem && ln -fs /usr/share/ca-certificates/9864.pem /etc/ssl/certs/9864.pem"
	I0409 00:52:24.996593    2144 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9864.pem
	I0409 00:52:25.004598    2144 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr  8 23:04 /usr/share/ca-certificates/9864.pem
	I0409 00:52:25.004629    2144 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 23:04 /usr/share/ca-certificates/9864.pem
	I0409 00:52:25.015080    2144 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9864.pem
	I0409 00:52:25.024395    2144 command_runner.go:130] > 51391683
	I0409 00:52:25.036256    2144 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9864.pem /etc/ssl/certs/51391683.0"
	I0409 00:52:25.071776    2144 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0409 00:52:25.079917    2144 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0409 00:52:25.081195    2144 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0409 00:52:25.081492    2144 kubeadm.go:934] updating node {m02 192.168.113.143 8443 v1.32.2 docker false true} ...
	I0409 00:52:25.081615    2144 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-611500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.113.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:multinode-611500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0409 00:52:25.091376    2144 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0409 00:52:25.110202    2144 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	I0409 00:52:25.110598    2144 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0409 00:52:25.122362    2144 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0409 00:52:25.137960    2144 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256
	I0409 00:52:25.137960    2144 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0409 00:52:25.137960    2144 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256
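binary.go:74 pulls kubelet, kubectl, and kubeadm straight from dl.k8s.io and verifies each against the published .sha256 file named in the checksum= parameter above. A compact sketch of that download-and-verify step (it assumes, as the release .sha256 files provide, that the file body is just the hex digest):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch returns the body of url, or an error on any non-200 status.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(sum)) {
		panic("checksum mismatch")
	}
	fmt.Println("kubeadm verified")
}
```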
	I0409 00:52:25.138548    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0409 00:52:25.138548    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm -> /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0409 00:52:25.151427    2144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0409 00:52:25.152564    2144 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0409 00:52:25.152564    2144 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0409 00:52:25.174208    2144 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0409 00:52:25.174208    2144 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0409 00:52:25.174328    2144 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0409 00:52:25.174328    2144 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0409 00:52:25.174328    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0409 00:52:25.174328    2144 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0409 00:52:25.174328    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
	I0409 00:52:25.185601    2144 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0409 00:52:25.249938    2144 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0409 00:52:25.249983    2144 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0409 00:52:25.250192    2144 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0409 00:52:26.522078    2144 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0409 00:52:26.539271    2144 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0409 00:52:26.569991    2144 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0409 00:52:26.616832    2144 ssh_runner.go:195] Run: grep 192.168.113.157	control-plane.minikube.internal$ /etc/hosts
	I0409 00:52:26.623380    2144 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.113.157	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0409 00:52:26.654029    2144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:52:26.869050    2144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0409 00:52:26.896857    2144 host.go:66] Checking if "multinode-611500" exists ...
	I0409 00:52:26.897438    2144 start.go:317] joinCluster: &{Name:multinode-611500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-611500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.157 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.113.143 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0409 00:52:26.897438    2144 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0409 00:52:26.897438    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 00:52:29.073341    2144 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 00:52:29.073341    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:52:29.073483    2144 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 00:52:31.615328    2144 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 00:52:31.615471    2144 main.go:141] libmachine: [stderr =====>] : 
	I0409 00:52:31.616346    2144 sshutil.go:53] new ssh client: &{IP:192.168.113.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\id_rsa Username:docker}
	I0409 00:52:32.043540    2144 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token h849ji.5mgjtm4upvg6xep5 --discovery-token-ca-cert-hash sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334 
	I0409 00:52:32.043614    2144 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1461082s)
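Note: the stdout captured at 00:52:32.043540 is the literal command a joining node must run, and the entry at 00:52:32.043853 below re-executes that same string with extra flags appended. As a minimal sketch of how the token and CA-cert hash could be pulled out of that printed line (hypothetical helper, not minikube's actual parsing):

package main

import (
	"fmt"
	"strings"
)

// parseJoinCommand extracts the bootstrap token and discovery hash from the
// one-line output of `kubeadm token create --print-join-command`.
func parseJoinCommand(line string) (token, caHash string) {
	fields := strings.Fields(line)
	for i, f := range fields {
		switch f {
		case "--token":
			if i+1 < len(fields) {
				token = fields[i+1]
			}
		case "--discovery-token-ca-cert-hash":
			if i+1 < len(fields) {
				caHash = fields[i+1]
			}
		}
	}
	return token, caHash
}

func main() {
	// The exact line from the log above.
	out := "kubeadm join control-plane.minikube.internal:8443 --token h849ji.5mgjtm4upvg6xep5 --discovery-token-ca-cert-hash sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334"
	t, h := parseJoinCommand(out)
	fmt.Println(t, h)
}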
	I0409 00:52:32.043705    2144 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.113.143 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0409 00:52:32.043853    2144 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token h849ji.5mgjtm4upvg6xep5 --discovery-token-ca-cert-hash sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-611500-m02"
	I0409 00:52:32.222439    2144 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0409 00:52:33.519969    2144 command_runner.go:130] > [preflight] Running pre-flight checks
	I0409 00:52:33.519969    2144 command_runner.go:130] > [preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
	I0409 00:52:33.519969    2144 command_runner.go:130] > [preflight] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
	I0409 00:52:33.519969    2144 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0409 00:52:33.519969    2144 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0409 00:52:33.519969    2144 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0409 00:52:33.519969    2144 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0409 00:52:33.519969    2144 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 506.973254ms
	I0409 00:52:33.519969    2144 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0409 00:52:33.519969    2144 command_runner.go:130] > This node has joined the cluster:
	I0409 00:52:33.519969    2144 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0409 00:52:33.519969    2144 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0409 00:52:33.519969    2144 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0409 00:52:33.519969    2144 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token h849ji.5mgjtm4upvg6xep5 --discovery-token-ca-cert-hash sha256:aa5a4dda055a1a4ae6c54f5bc7c6626b2903d2da5858116de66a68e5e1fbf334 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-611500-m02": (1.4760969s)
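The "Certificate signing request was sent to apiserver" line above is the kubelet TLS bootstrap that the join token authorizes. A hedged client-go sketch for inspecting the CSRs that the bootstrap files (the kubeconfig path is illustrative, not the one from this run):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative path; the run above uses the Jenkins worker's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Each joining kubelet files a CSR during TLS bootstrap; for its client
	// credentials the signer is kubernetes.io/kube-apiserver-client-kubelet.
	csrs, err := cs.CertificatesV1().CertificateSigningRequests().List(
		context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range csrs.Items {
		fmt.Println(c.Name, c.Spec.SignerName)
	}
}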
	I0409 00:52:33.519969    2144 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0409 00:52:33.718755    2144 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
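The symlink message is systemd acknowledging the enable; the kubelet unit now starts on boot, which resolves the preflight warning logged at 00:52:32.222439. Run locally rather than through ssh_runner, the same sequence would look like this illustrative sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the ssh_runner command above; local equivalent for illustration only.
	cmd := exec.Command("/bin/bash", "-c",
		"sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		panic(err)
	}
}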
	I0409 00:52:33.909586    2144 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-611500-m02 minikube.k8s.io/updated_at=2025_04_09T00_52_33_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83 minikube.k8s.io/name=multinode-611500 minikube.k8s.io/primary=false
	I0409 00:52:34.039321    2144 command_runner.go:130] > node/multinode-611500-m02 labeled
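The kubectl invocation above stamps the joined node with minikube's bookkeeping labels. Roughly the same effect can be had with a JSON merge patch via client-go, which is approximately what `kubectl label --overwrite` amounts to; a sketch, assuming a clientset built as in the earlier snippet:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// labelJoinedNode applies two of the labels from the log entry above by
// merge-patching metadata.labels on the node object.
func labelJoinedNode(cs *kubernetes.Clientset) error {
	patch := []byte(`{"metadata":{"labels":{` +
		`"minikube.k8s.io/name":"multinode-611500",` +
		`"minikube.k8s.io/primary":"false"}}}`)
	_, err := cs.CoreV1().Nodes().Patch(context.TODO(), "multinode-611500-m02",
		types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}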
	I0409 00:52:34.042516    2144 start.go:319] duration metric: took 7.1449838s to joinCluster
	I0409 00:52:34.042741    2144 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.113.143 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0409 00:52:34.043458    2144 config.go:182] Loaded profile config "multinode-611500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 00:52:34.045997    2144 out.go:177] * Verifying Kubernetes components...
	I0409 00:52:34.060684    2144 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 00:52:34.249571    2144 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0409 00:52:34.278103    2144 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0409 00:52:34.278793    2144 kapi.go:59] client config for multinode-611500: &rest.Config{Host:"https://192.168.113.157:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-611500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-611500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2809400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
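The rest.Config dump above shows plain client-certificate auth against the control plane's published endpoint. A hedged sketch constructing the equivalent config by hand, with the host and paths copied from this run:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Mirrors the dumped config: TLS client cert/key plus the cluster CA.
	cfg := &rest.Config{
		Host: "https://192.168.113.157:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: `C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\client.crt`,
			KeyFile:  `C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\client.key`,
			CAFile:   `C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt`,
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("clientset ready for", cfg.Host, cs != nil)
}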
	I0409 00:52:34.279675    2144 node_ready.go:35] waiting up to 6m0s for node "multinode-611500-m02" to be "Ready" ...
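From here on the log is the readiness poll itself: node_ready.go re-GETs the node roughly every 500ms (compare the .280/.780 timestamps) until its Ready condition turns True or the 6m budget is exhausted. A sketch of that loop, assuming a clientset as above; the 500ms interval is inferred from the timestamps, not taken from source:

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node until its Ready condition is True,
// approximating what node_ready.go is doing in the entries below.
func waitNodeReady(cs *kubernetes.Clientset, name string) error {
	return wait.PollUntilContextTimeout(context.Background(),
		500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}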
	I0409 00:52:34.279675    2144 type.go:168] "Request Body" body=""
	I0409 00:52:34.279675    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:34.279675    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:34.279675    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:34.279675    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:34.292837    2144 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0409 00:52:34.292837    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:34.292837    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:34.292837    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:34.292919    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:34.292919    2144 round_trippers.go:587]     Content-Length: 2721
	I0409 00:52:34.292919    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:34 GMT
	I0409 00:52:34.292919    2144 round_trippers.go:587]     Audit-Id: 6284c9f5-186c-47d9-bb16-7e41da1a064f
	I0409 00:52:34.292919    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:34.293064    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 8a 15 0a 9c 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 30 32 38 00 42  |bd39faf32.6028.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12470 chars]
	 >
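Every response body in this poll opens with bytes 6b 38 73 00, i.e. "k8s" plus a NUL: the magic prefix of the Kubernetes protobuf wire envelope, returned because each request sends Accept: application/vnd.kubernetes.protobuf. A small sketch that fetches the same node raw and checks that prefix (clientset as above):

package main

import (
	"bytes"
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// checkProtobufEnvelope requests the node with the protobuf Accept header and
// verifies the 4-byte "k8s\x00" magic that opens every hexdump in this log.
func checkProtobufEnvelope(cs *kubernetes.Clientset) error {
	raw, err := cs.CoreV1().RESTClient().Get().
		AbsPath("/api/v1/nodes/multinode-611500-m02").
		SetHeader("Accept", "application/vnd.kubernetes.protobuf").
		DoRaw(context.TODO())
	if err != nil {
		return err
	}
	if !bytes.HasPrefix(raw, []byte{0x6b, 0x38, 0x73, 0x00}) {
		return fmt.Errorf("missing k8s protobuf magic prefix")
	}
	fmt.Printf("got %d protobuf-encoded bytes\n", len(raw))
	return nil
}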
	I0409 00:52:34.780157    2144 type.go:168] "Request Body" body=""
	I0409 00:52:34.780157    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:34.780157    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:34.780157    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:34.780157    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:34.784934    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:34.784934    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:34.785065    2144 round_trippers.go:587]     Audit-Id: 0d7aa5d0-5fcb-4518-8458-8468ff54dcd4
	I0409 00:52:34.785065    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:34.785065    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:34.785065    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:34.785065    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:34.785065    2144 round_trippers.go:587]     Content-Length: 2721
	I0409 00:52:34.785065    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:34 GMT
	I0409 00:52:34.785140    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 8a 15 0a 9c 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 30 32 38 00 42  |bd39faf32.6028.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12470 chars]
	 >
	I0409 00:52:35.280200    2144 type.go:168] "Request Body" body=""
	I0409 00:52:35.280200    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:35.280200    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:35.280200    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:35.280200    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:35.284692    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:35.284789    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:35.284789    2144 round_trippers.go:587]     Audit-Id: cdda7fae-6faa-4768-b2b4-3836d53d1061
	I0409 00:52:35.284789    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:35.284789    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:35.284789    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:35.284789    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:35.284789    2144 round_trippers.go:587]     Content-Length: 2791
	I0409 00:52:35.284789    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:35 GMT
	I0409 00:52:35.284789    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d0 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 30 38 38 00 42  |bd39faf32.6088.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12792 chars]
	 >
	I0409 00:52:35.781120    2144 type.go:168] "Request Body" body=""
	I0409 00:52:35.781120    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:35.781120    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:35.781120    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:35.781120    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:35.785061    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:52:35.785144    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:35.785144    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:35.785144    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:35.785144    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:35.785144    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:35.785217    2144 round_trippers.go:587]     Content-Length: 2791
	I0409 00:52:35.785217    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:35 GMT
	I0409 00:52:35.785252    2144 round_trippers.go:587]     Audit-Id: 09076b02-7635-4a18-9247-61ec80b23144
	I0409 00:52:35.785252    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d0 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 30 38 38 00 42  |bd39faf32.6088.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12792 chars]
	 >
	I0409 00:52:36.280180    2144 type.go:168] "Request Body" body=""
	I0409 00:52:36.280180    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:36.280180    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:36.280180    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:36.280180    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:36.284959    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:36.284959    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:36.284959    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:36.284959    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:36.284959    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:36.285121    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:36.285121    2144 round_trippers.go:587]     Content-Length: 2791
	I0409 00:52:36.285121    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:36 GMT
	I0409 00:52:36.285121    2144 round_trippers.go:587]     Audit-Id: db560072-0095-40ca-8407-04f3fe735490
	I0409 00:52:36.285340    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d0 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 30 38 38 00 42  |bd39faf32.6088.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12792 chars]
	 >
	I0409 00:52:36.285595    2144 node_ready.go:53] node "multinode-611500-m02" has status "Ready":"False"
	I0409 00:52:36.780401    2144 type.go:168] "Request Body" body=""
	I0409 00:52:36.780947    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:36.780947    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:36.780947    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:36.780947    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:36.784371    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:52:36.784371    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:36.784986    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:36 GMT
	I0409 00:52:36.785029    2144 round_trippers.go:587]     Audit-Id: f1abbce5-b132-4d97-829a-3eb0d14339f0
	I0409 00:52:36.785029    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:36.785075    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:36.785075    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:36.785075    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:36.785075    2144 round_trippers.go:587]     Content-Length: 2791
	I0409 00:52:36.785367    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d0 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 30 38 38 00 42  |bd39faf32.6088.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12792 chars]
	 >
	I0409 00:52:37.281261    2144 type.go:168] "Request Body" body=""
	I0409 00:52:37.281388    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:37.281388    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:37.281388    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:37.281388    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:37.286398    2144 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 00:52:37.286495    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:37.286495    2144 round_trippers.go:587]     Content-Length: 2791
	I0409 00:52:37.286495    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:37 GMT
	I0409 00:52:37.286495    2144 round_trippers.go:587]     Audit-Id: 3210ba47-1e62-46aa-b4ce-dbb8c81bd01d
	I0409 00:52:37.286495    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:37.286495    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:37.286495    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:37.286495    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:37.286624    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d0 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 30 38 38 00 42  |bd39faf32.6088.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12792 chars]
	 >
	I0409 00:52:37.780060    2144 type.go:168] "Request Body" body=""
	I0409 00:52:37.780060    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:37.780060    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:37.780060    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:37.780060    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:37.785442    2144 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 00:52:37.785530    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:37.785700    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:37.785700    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:37.785700    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:37.785743    2144 round_trippers.go:587]     Content-Length: 2791
	I0409 00:52:37.785743    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:37 GMT
	I0409 00:52:37.785743    2144 round_trippers.go:587]     Audit-Id: a5998c36-814a-41d2-90bf-8c93b8538ff8
	I0409 00:52:37.785743    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:37.785949    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d0 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 30 38 38 00 42  |bd39faf32.6088.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12792 chars]
	 >
	I0409 00:52:38.280560    2144 type.go:168] "Request Body" body=""
	I0409 00:52:38.280680    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:38.280797    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:38.280926    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:38.280979    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:38.287560    2144 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 00:52:38.287560    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:38.287560    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:38.287560    2144 round_trippers.go:587]     Content-Length: 2791
	I0409 00:52:38.287560    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:38 GMT
	I0409 00:52:38.287560    2144 round_trippers.go:587]     Audit-Id: f4a400bf-84c8-4366-b477-54e48d0009e5
	I0409 00:52:38.287560    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:38.287560    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:38.287560    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:38.287560    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d0 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 30 38 38 00 42  |bd39faf32.6088.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12792 chars]
	 >
	I0409 00:52:38.288208    2144 node_ready.go:53] node "multinode-611500-m02" has status "Ready":"False"
	I0409 00:52:38.780564    2144 type.go:168] "Request Body" body=""
	I0409 00:52:38.780619    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:38.780619    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:38.780619    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:38.780619    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:38.784382    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:52:38.784419    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:38.784419    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:38.784419    2144 round_trippers.go:587]     Content-Length: 2791
	I0409 00:52:38.784419    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:38 GMT
	I0409 00:52:38.784419    2144 round_trippers.go:587]     Audit-Id: e270e18a-83b7-46b2-b9bb-2120f5005548
	I0409 00:52:38.784419    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:38.784419    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:38.784419    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:38.784723    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d0 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 30 38 38 00 42  |bd39faf32.6088.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12792 chars]
	 >
	I0409 00:52:39.281084    2144 type.go:168] "Request Body" body=""
	I0409 00:52:39.281084    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:39.281084    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:39.281084    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:39.281084    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:39.284595    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:52:39.284595    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:39.284595    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:39 GMT
	I0409 00:52:39.284595    2144 round_trippers.go:587]     Audit-Id: 136ed06b-3933-4eb5-af61-2bddbc22dac7
	I0409 00:52:39.284595    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:39.284595    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:39.284595    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:39.284595    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:39.284595    2144 round_trippers.go:587]     Content-Length: 2791
	I0409 00:52:39.284595    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d0 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 30 38 38 00 42  |bd39faf32.6088.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12792 chars]
	 >
	I0409 00:52:39.780181    2144 type.go:168] "Request Body" body=""
	I0409 00:52:39.780181    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:39.780181    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:39.780181    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:39.780181    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:39.784749    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:39.784749    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:39.784749    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:39.784749    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:39.784876    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:39.784933    2144 round_trippers.go:587]     Content-Length: 2791
	I0409 00:52:39.784958    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:39 GMT
	I0409 00:52:39.784958    2144 round_trippers.go:587]     Audit-Id: 8f3769e4-40ed-4784-8c39-2f526457eab1
	I0409 00:52:39.784958    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:39.785128    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d0 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 30 38 38 00 42  |bd39faf32.6088.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12792 chars]
	 >
	I0409 00:52:40.280528    2144 type.go:168] "Request Body" body=""
	I0409 00:52:40.280528    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:40.280528    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:40.280528    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:40.280528    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:40.286727    2144 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 00:52:40.286835    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:40.286835    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:40.286835    2144 round_trippers.go:587]     Content-Length: 2791
	I0409 00:52:40.286835    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:40 GMT
	I0409 00:52:40.286835    2144 round_trippers.go:587]     Audit-Id: 9ff2f1e3-8856-46ce-b960-5c226b1054e6
	I0409 00:52:40.286897    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:40.286897    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:40.286897    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:40.286897    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d0 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 30 38 38 00 42  |bd39faf32.6088.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12792 chars]
	 >
	I0409 00:52:40.780305    2144 type.go:168] "Request Body" body=""
	I0409 00:52:40.780305    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:40.780305    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:40.780305    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:40.780305    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:40.785522    2144 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 00:52:40.785615    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:40.785615    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:40.785615    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:40.785615    2144 round_trippers.go:587]     Content-Length: 2791
	I0409 00:52:40.785615    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:40 GMT
	I0409 00:52:40.785615    2144 round_trippers.go:587]     Audit-Id: afc36e88-0b56-4aa8-aec7-c637ec8bf84f
	I0409 00:52:40.785615    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:40.785615    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:40.785685    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d0 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 30 38 38 00 42  |bd39faf32.6088.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12792 chars]
	 >
	I0409 00:52:40.785685    2144 node_ready.go:53] node "multinode-611500-m02" has status "Ready":"False"
	I0409 00:52:41.280437    2144 type.go:168] "Request Body" body=""
	I0409 00:52:41.280437    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:41.280437    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:41.280437    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:41.280437    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:41.285335    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:41.285335    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:41.285335    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:41.285335    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:41.285335    2144 round_trippers.go:587]     Content-Length: 2791
	I0409 00:52:41.285450    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:41 GMT
	I0409 00:52:41.285450    2144 round_trippers.go:587]     Audit-Id: f2dbb7d7-e11b-4e99-9df4-8fab2fdfb5c8
	I0409 00:52:41.285450    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:41.285450    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:41.285661    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d0 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 30 38 38 00 42  |bd39faf32.6088.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12792 chars]
	 >
	I0409 00:52:41.779926    2144 type.go:168] "Request Body" body=""
	I0409 00:52:41.779926    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:41.779926    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:41.779926    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:41.779926    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:41.784077    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:41.784077    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:41.784077    2144 round_trippers.go:587]     Content-Length: 2791
	I0409 00:52:41.784077    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:41 GMT
	I0409 00:52:41.784077    2144 round_trippers.go:587]     Audit-Id: dc5a360f-9753-4634-82af-32cdd462baab
	I0409 00:52:41.784077    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:41.784077    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:41.784077    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:41.784077    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:41.784306    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d0 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 30 38 38 00 42  |bd39faf32.6088.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12792 chars]
	 >
	I0409 00:52:42.280321    2144 type.go:168] "Request Body" body=""
	I0409 00:52:42.280321    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:42.280321    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:42.280321    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:42.280321    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:42.283663    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:52:42.283730    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:42.283730    2144 round_trippers.go:587]     Audit-Id: ab53e106-dd08-40d7-9457-aeaac86ada45
	I0409 00:52:42.283730    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:42.283730    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:42.283730    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:42.283730    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:42.283730    2144 round_trippers.go:587]     Content-Length: 2791
	I0409 00:52:42.283730    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:42 GMT
	I0409 00:52:42.283730    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d0 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 30 38 38 00 42  |bd39faf32.6088.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12792 chars]
	 >
	I0409 00:52:42.780046    2144 type.go:168] "Request Body" body=""
	I0409 00:52:42.780046    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:42.780046    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:42.780046    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:42.780046    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:42.784208    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:42.784208    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:42.784208    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:42.784208    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:42.784208    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:42.784208    2144 round_trippers.go:587]     Content-Length: 2791
	I0409 00:52:42.784208    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:42 GMT
	I0409 00:52:42.784208    2144 round_trippers.go:587]     Audit-Id: cdf6c572-5bd8-4db1-a04f-830bdef8a509
	I0409 00:52:42.784208    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:42.784208    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d0 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 30 38 38 00 42  |bd39faf32.6088.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12792 chars]
	 >
	I0409 00:52:43.280247    2144 type.go:168] "Request Body" body=""
	I0409 00:52:43.280247    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:43.280247    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:43.280247    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:43.280247    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:43.368524    2144 round_trippers.go:581] Response Status: 200 OK in 88 milliseconds
	I0409 00:52:43.368524    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:43.368524    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:43 GMT
	I0409 00:52:43.369540    2144 round_trippers.go:587]     Audit-Id: 7ced5cd8-b6ef-47b0-b447-369e4dd9c26d
	I0409 00:52:43.369540    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:43.369540    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:43.369540    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:43.369540    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:43.369540    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:43.369540    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:43.369540    2144 node_ready.go:53] node "multinode-611500-m02" has status "Ready":"False"
	I0409 00:52:43.780461    2144 type.go:168] "Request Body" body=""
	I0409 00:52:43.781042    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:43.781042    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:43.781042    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:43.781042    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:44.018292    2144 round_trippers.go:581] Response Status: 200 OK in 237 milliseconds
	I0409 00:52:44.018417    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:44.018417    2144 round_trippers.go:587]     Audit-Id: c4674c83-b28f-4df1-b61a-68b763c04065
	I0409 00:52:44.018417    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:44.018417    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:44.018511    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:44.018511    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:44.018564    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:44.018564    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:44 GMT
	I0409 00:52:44.018666    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:44.281104    2144 type.go:168] "Request Body" body=""
	I0409 00:52:44.281104    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:44.281104    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:44.281104    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:44.281104    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:44.285000    2144 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 00:52:44.285151    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:44.285151    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:44.285151    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:44.285151    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:44.285151    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:44.285151    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:44 GMT
	I0409 00:52:44.285151    2144 round_trippers.go:587]     Audit-Id: faf2cc27-3c58-4930-b15f-c0f4b49c8a51
	I0409 00:52:44.285151    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:44.285151    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:44.780913    2144 type.go:168] "Request Body" body=""
	I0409 00:52:44.780913    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:44.780913    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:44.780913    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:44.780913    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:44.786637    2144 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 00:52:44.786637    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:44.786637    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:44.786637    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:44.786637    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:44.786637    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:44.786637    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:44 GMT
	I0409 00:52:44.786637    2144 round_trippers.go:587]     Audit-Id: 0428e390-2583-4f6a-a8b7-841b8b97dead
	I0409 00:52:44.786637    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:44.786637    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:45.281687    2144 type.go:168] "Request Body" body=""
	I0409 00:52:45.281763    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:45.281855    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:45.281855    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:45.281855    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:45.285178    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:52:45.285178    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:45.285178    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:45.285178    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:45.285295    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:45.285295    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:45.285295    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:45.285295    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:45 GMT
	I0409 00:52:45.285295    2144 round_trippers.go:587]     Audit-Id: 2ad15e0c-1a1f-4b73-adf8-ada2b95b6510
	I0409 00:52:45.285595    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:45.781310    2144 type.go:168] "Request Body" body=""
	I0409 00:52:45.781310    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:45.781310    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:45.781310    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:45.781310    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:45.785375    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:52:45.785375    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:45.785375    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:45 GMT
	I0409 00:52:45.785375    2144 round_trippers.go:587]     Audit-Id: 099d37ef-db1c-44e8-b62f-79a825979e43
	I0409 00:52:45.785375    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:45.785375    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:45.785375    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:45.785375    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:45.785375    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:45.785375    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:45.785821    2144 node_ready.go:53] node "multinode-611500-m02" has status "Ready":"False"
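	Every request above sends Accept: application/vnd.kubernetes.protobuf,application/json, and the server answers with Content-Type: application/vnd.kubernetes.protobuf, which is why the response bodies appear as hexdumps rather than JSON. A client opts into this wire format through two rest.Config fields; a sketch assuming a default kubeconfig, where the content-type strings are exactly those shown in the log:

		// Sketch only: requesting the protobuf wire format seen in these responses.
		package main

		import (
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)

		func main() {
			cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
			if err != nil {
				panic(err)
			}
			// Prefer protobuf, fall back to JSON: the Accept header in the log.
			cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
			cfg.ContentType = "application/vnd.kubernetes.protobuf"
			if _, err := kubernetes.NewForConfig(cfg); err != nil {
				panic(err)
			}
		}

	JSON stays in the Accept list as a fallback for resources the server cannot encode as protobuf.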
	I0409 00:52:46.281218    2144 type.go:168] "Request Body" body=""
	I0409 00:52:46.281218    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:46.281218    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:46.281218    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:46.281218    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:46.285568    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:46.285608    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:46.285608    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:46.285608    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:46 GMT
	I0409 00:52:46.285659    2144 round_trippers.go:587]     Audit-Id: 4a6a4afc-3f46-4e96-94e4-155961c28d84
	I0409 00:52:46.285659    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:46.285659    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:46.285659    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:46.285659    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:46.285803    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:46.780376    2144 type.go:168] "Request Body" body=""
	I0409 00:52:46.780376    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:46.780376    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:46.780376    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:46.780376    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:46.784935    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:46.784935    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:46.784935    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:46.784935    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:46.784935    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:46.784935    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:46 GMT
	I0409 00:52:46.784935    2144 round_trippers.go:587]     Audit-Id: 03e005f2-bfd3-4760-bb8f-6fd47582d8d2
	I0409 00:52:46.784935    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:46.784935    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:46.784935    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:47.280420    2144 type.go:168] "Request Body" body=""
	I0409 00:52:47.280420    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:47.280420    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:47.280420    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:47.280420    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:47.284876    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:47.284876    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:47.284876    2144 round_trippers.go:587]     Audit-Id: 4c6eb0c8-56eb-438a-8334-30b674a1a022
	I0409 00:52:47.284876    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:47.284876    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:47.284876    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:47.284876    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:47.284876    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:47.284876    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:47 GMT
	I0409 00:52:47.284876    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:47.780785    2144 type.go:168] "Request Body" body=""
	I0409 00:52:47.780785    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:47.780785    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:47.780785    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:47.780785    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:47.786098    2144 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 00:52:47.786174    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:47.786174    2144 round_trippers.go:587]     Audit-Id: 21f1ffa5-0d94-4732-998f-290788a6f049
	I0409 00:52:47.786174    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:47.786174    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:47.786174    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:47.786174    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:47.786174    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:47.786174    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:47 GMT
	I0409 00:52:47.786427    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:47.786713    2144 node_ready.go:53] node "multinode-611500-m02" has status "Ready":"False"
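	Each response body begins with the four bytes 6b 38 73 00 ("k8s" plus a NUL), the magic prefix of the Kubernetes protobuf envelope; the bytes that follow are a runtime.Unknown message whose TypeMeta is visible in the dump as "v1"/"Node". A sketch of decoding such a body with apimachinery's protobuf serializer, under the assumption that NewSerializer takes a creater and a typer as in current client-go/apimachinery releases:

		// Sketch only: decoding a "k8s\x00"-framed protobuf body into a *corev1.Node.
		package sketch

		import (
			"fmt"

			corev1 "k8s.io/api/core/v1"
			"k8s.io/apimachinery/pkg/runtime/serializer/protobuf"
			"k8s.io/client-go/kubernetes/scheme"
		)

		func decodeNode(raw []byte) (*corev1.Node, error) {
			// scheme.Scheme registers core/v1, so it can both create and type Nodes.
			s := protobuf.NewSerializer(scheme.Scheme, scheme.Scheme)
			obj, _, err := s.Decode(raw, nil, nil)
			if err != nil {
				return nil, err
			}
			node, ok := obj.(*corev1.Node)
			if !ok {
				return nil, fmt.Errorf("decoded %T, expected *corev1.Node", obj)
			}
			return node, nil
		}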
	I0409 00:52:48.280673    2144 type.go:168] "Request Body" body=""
	I0409 00:52:48.280673    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:48.280673    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:48.280673    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:48.280673    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:48.285079    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:48.285147    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:48.285147    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:48.285147    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:48.285147    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:48.285147    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:48 GMT
	I0409 00:52:48.285147    2144 round_trippers.go:587]     Audit-Id: d106916c-c8d0-4a17-998c-50c51ba0602f
	I0409 00:52:48.285147    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:48.285147    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:48.285430    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:48.780062    2144 type.go:168] "Request Body" body=""
	I0409 00:52:48.780062    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:48.780062    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:48.780062    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:48.780062    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:48.788929    2144 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0409 00:52:48.789001    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:48.789001    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:48.789001    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:48 GMT
	I0409 00:52:48.789001    2144 round_trippers.go:587]     Audit-Id: 443fb3e0-0139-4f79-856e-06f1dabc6a56
	I0409 00:52:48.789071    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:48.789071    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:48.789071    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:48.789071    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:48.789376    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:49.280862    2144 type.go:168] "Request Body" body=""
	I0409 00:52:49.281094    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:49.281094    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:49.281151    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:49.281151    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:49.285112    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:52:49.285112    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:49.285112    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:49.285112    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:49.285112    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:49.285112    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:49.285112    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:49 GMT
	I0409 00:52:49.285112    2144 round_trippers.go:587]     Audit-Id: 5d7309b8-d167-45b7-add1-61aa79e9d743
	I0409 00:52:49.285112    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:49.285336    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:49.780501    2144 type.go:168] "Request Body" body=""
	I0409 00:52:49.781128    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:49.781128    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:49.781128    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:49.781128    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:50.071932    2144 round_trippers.go:581] Response Status: 200 OK in 290 milliseconds
	I0409 00:52:50.071932    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:50.071932    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:50.071932    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:50.071932    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:50.071932    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:50.071932    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:50.071932    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:50 GMT
	I0409 00:52:50.071932    2144 round_trippers.go:587]     Audit-Id: e37c7a66-f847-419a-bbee-28396334513d
	I0409 00:52:50.071932    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:50.072479    2144 node_ready.go:53] node "multinode-611500-m02" has status "Ready":"False"
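	The round_trippers.go:470/581/587 lines themselves come from client-go's debug round-tripper, which wraps the HTTP transport at high verbosity and prints the method, URL, latency, and response headers. A stdlib-only sketch of the same idea; the output format mimics the log, but the type is illustrative, not client-go's implementation:

		// Sketch only: a logging http.RoundTripper in the spirit of round_trippers.go.
		package sketch

		import (
			"log"
			"net/http"
			"time"
		)

		type loggingRoundTripper struct {
			next http.RoundTripper // the real transport being wrapped
		}

		func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
			log.Printf("%s %s", req.Method, req.URL)
			start := time.Now()
			resp, err := l.next.RoundTrip(req)
			if err != nil {
				return nil, err
			}
			log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
			for name, values := range resp.Header { // e.g. Audit-Id, X-Kubernetes-Pf-Flowschema-Uid
				for _, v := range values {
					log.Printf("    %s: %s", name, v)
				}
			}
			return resp, nil
		}

	The X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid headers it would print identify the API Priority and Fairness flow schema and priority level that admitted each request.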
	I0409 00:52:50.280142    2144 type.go:168] "Request Body" body=""
	I0409 00:52:50.280142    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:50.280142    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:50.280142    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:50.280142    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:50.285308    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:50.285308    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:50.285308    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:50 GMT
	I0409 00:52:50.285308    2144 round_trippers.go:587]     Audit-Id: f9e36750-3dad-42ad-9268-41e540a7b287
	I0409 00:52:50.285482    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:50.285482    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:50.285508    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:50.285508    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:50.285508    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:50.285708    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:50.781025    2144 type.go:168] "Request Body" body=""
	I0409 00:52:50.781430    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:50.781430    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:50.781430    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:50.781430    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:50.785466    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:50.785552    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:50.785552    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:50.785552    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:50.785552    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:50 GMT
	I0409 00:52:50.785614    2144 round_trippers.go:587]     Audit-Id: 83f9b698-ac25-4fa0-827c-4f1666a36fa4
	I0409 00:52:50.785639    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:50.785683    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:50.785683    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:50.785842    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:51.280748    2144 type.go:168] "Request Body" body=""
	I0409 00:52:51.280748    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:51.280748    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:51.280748    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:51.280748    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:51.286574    2144 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 00:52:51.286574    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:51.286574    2144 round_trippers.go:587]     Audit-Id: a211ec6f-ca12-46fa-a111-cdac4b0cf110
	I0409 00:52:51.286574    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:51.286574    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:51.286574    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:51.286688    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:51.286743    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:51.286743    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:51 GMT
	I0409 00:52:51.287042    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:51.780911    2144 type.go:168] "Request Body" body=""
	I0409 00:52:51.780911    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:51.780911    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:51.780911    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:51.780911    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:51.786056    2144 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 00:52:51.786150    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:51.786150    2144 round_trippers.go:587]     Audit-Id: ac328745-8664-4446-8086-2f4a73d3fab9
	I0409 00:52:51.786150    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:51.786150    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:51.786150    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:51.786245    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:51.786245    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:51.786245    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:51 GMT
	I0409 00:52:51.786788    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:52.280187    2144 type.go:168] "Request Body" body=""
	I0409 00:52:52.280187    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:52.280187    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:52.280187    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:52.280187    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:52.285158    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:52.285158    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:52.285303    2144 round_trippers.go:587]     Audit-Id: c5dc8743-10a4-4a12-875c-0e7ff85d9702
	I0409 00:52:52.285303    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:52.285303    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:52.285303    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:52.285303    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:52.285303    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:52.285303    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:52 GMT
	I0409 00:52:52.285686    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:52.285913    2144 node_ready.go:53] node "multinode-611500-m02" has status "Ready":"False"
	I0409 00:52:52.780842    2144 type.go:168] "Request Body" body=""
	I0409 00:52:52.780842    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:52.780842    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:52.780842    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:52.780842    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:52.785432    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:52.785432    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:52.785488    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:52.785488    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:52.785488    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:52.785488    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:52 GMT
	I0409 00:52:52.785488    2144 round_trippers.go:587]     Audit-Id: 051f5c6d-c422-4584-9414-024495947368
	I0409 00:52:52.785488    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:52.785488    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:52.785566    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:53.280139    2144 type.go:168] "Request Body" body=""
	I0409 00:52:53.280139    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:53.280139    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:53.280139    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:53.280139    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:53.284592    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:53.284712    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:53.284862    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:53.284862    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:53.284862    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:53.284862    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:53.284862    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:53 GMT
	I0409 00:52:53.284862    2144 round_trippers.go:587]     Audit-Id: 4f28efc1-e15a-4c47-be44-f18a8e68838f
	I0409 00:52:53.284862    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:53.284862    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:53.780537    2144 type.go:168] "Request Body" body=""
	I0409 00:52:53.780537    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:53.780537    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:53.780537    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:53.780537    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:53.785725    2144 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 00:52:53.785725    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:53.785725    2144 round_trippers.go:587]     Audit-Id: 155b04c6-f357-45b8-ae66-75862eea66e5
	I0409 00:52:53.786315    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:53.786315    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:53.786315    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:53.786315    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:53.786315    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:53.786315    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:53 GMT
	I0409 00:52:53.786569    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:54.281471    2144 type.go:168] "Request Body" body=""
	I0409 00:52:54.281471    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:54.281471    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:54.281471    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:54.281471    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:54.286348    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:54.286432    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:54.286432    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:54.286432    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:54.286432    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:54.286432    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:54.286432    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:54.286432    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:54 GMT
	I0409 00:52:54.286510    2144 round_trippers.go:587]     Audit-Id: aec89b13-104a-4f22-acaf-5bbd79593022
	I0409 00:52:54.286770    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:54.286861    2144 node_ready.go:53] node "multinode-611500-m02" has status "Ready":"False"
	I0409 00:52:54.780739    2144 type.go:168] "Request Body" body=""
	I0409 00:52:54.780926    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:54.780926    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:54.780926    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:54.780997    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:54.785392    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:54.785392    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:54.785392    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:54.785392    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:54.785392    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:54.785392    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:54 GMT
	I0409 00:52:54.785392    2144 round_trippers.go:587]     Audit-Id: d306443d-05da-4b34-8651-cb4e65ba5add
	I0409 00:52:54.785392    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:54.785392    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:54.785392    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:55.280163    2144 type.go:168] "Request Body" body=""
	I0409 00:52:55.280163    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:55.280163    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:55.280163    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:55.280163    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:55.284634    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:55.284728    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:55.284728    2144 round_trippers.go:587]     Audit-Id: 0caa2a8f-8b1d-40c8-9a59-e69aa52c03f3
	I0409 00:52:55.284728    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:55.284728    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:55.284728    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:55.284728    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:55.284728    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:55.284728    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:55 GMT
	I0409 00:52:55.284894    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:55.780780    2144 type.go:168] "Request Body" body=""
	I0409 00:52:55.780926    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:55.780926    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:55.781038    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:55.781038    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:55.787704    2144 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 00:52:55.787704    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:55.787809    2144 round_trippers.go:587]     Audit-Id: 1b6b62fe-f81e-412f-9a18-349e74d415a9
	I0409 00:52:55.787809    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:55.787809    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:55.787809    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:55.787809    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:55.787809    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:55.787809    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:55 GMT
	I0409 00:52:55.787873    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:56.280504    2144 type.go:168] "Request Body" body=""
	I0409 00:52:56.280504    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:56.280504    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:56.280504    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:56.280504    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:56.285099    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:56.285099    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:56.285099    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:56.285099    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:56.285099    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:56.285099    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:56 GMT
	I0409 00:52:56.285099    2144 round_trippers.go:587]     Audit-Id: ecb1dd9a-af53-4cfd-b8db-85c8d7ff4067
	I0409 00:52:56.285099    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:56.285099    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:56.285441    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:56.780366    2144 type.go:168] "Request Body" body=""
	I0409 00:52:56.780366    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:56.780366    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:56.780366    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:56.780366    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:56.784511    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:56.784597    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:56.784597    2144 round_trippers.go:587]     Audit-Id: e4aca2e6-b307-41b4-9f44-a8856cfaccdf
	I0409 00:52:56.784649    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:56.784649    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:56.784649    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:56.784649    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:56.784649    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:56.784649    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:56 GMT
	I0409 00:52:56.784860    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:56.785018    2144 node_ready.go:53] node "multinode-611500-m02" has status "Ready":"False"
	I0409 00:52:57.280937    2144 type.go:168] "Request Body" body=""
	I0409 00:52:57.280937    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:57.280937    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:57.280937    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:57.280937    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:57.285188    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:57.285495    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:57.285495    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:57.285495    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:57 GMT
	I0409 00:52:57.285495    2144 round_trippers.go:587]     Audit-Id: e6e7086b-6379-4cbd-bda9-376d5e92cb88
	I0409 00:52:57.285495    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:57.285495    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:57.285495    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:57.285495    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:57.285695    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:57.781186    2144 type.go:168] "Request Body" body=""
	I0409 00:52:57.781186    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:57.781186    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:57.781186    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:57.781186    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:57.785819    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:57.785933    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:57.785933    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:57.785933    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:57.785992    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:57.785992    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:57 GMT
	I0409 00:52:57.785992    2144 round_trippers.go:587]     Audit-Id: dda7c854-4245-45aa-b361-0638ba609800
	I0409 00:52:57.785992    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:57.785992    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:57.786092    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:58.281487    2144 type.go:168] "Request Body" body=""
	I0409 00:52:58.281561    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:58.281668    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:58.281668    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:58.281668    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:58.289671    2144 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0409 00:52:58.289755    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:58.289847    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:58.289847    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:58 GMT
	I0409 00:52:58.289847    2144 round_trippers.go:587]     Audit-Id: c8d7d712-6f66-4e7e-920e-9216923bb302
	I0409 00:52:58.289847    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:58.289874    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:58.289887    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:58.289887    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:58.290033    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:58.780798    2144 type.go:168] "Request Body" body=""
	I0409 00:52:58.780798    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:58.780798    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:58.780798    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:58.780798    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:58.785498    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:58.785498    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:58.785574    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:58 GMT
	I0409 00:52:58.785574    2144 round_trippers.go:587]     Audit-Id: cd50ae1e-37b0-412b-a233-8d3efe81d444
	I0409 00:52:58.785574    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:58.785574    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:58.785574    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:58.785664    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:58.785664    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:58.785753    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:58.785753    2144 node_ready.go:53] node "multinode-611500-m02" has status "Ready":"False"
	I0409 00:52:59.280215    2144 type.go:168] "Request Body" body=""
	I0409 00:52:59.280215    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:59.280215    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:59.280215    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:59.280215    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:59.284844    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:59.284929    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:59.284929    2144 round_trippers.go:587]     Audit-Id: 87a1ccce-58d6-42dd-99c2-639033802df7
	I0409 00:52:59.284929    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:59.284929    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:59.284929    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:59.284929    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:59.284929    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:59.285012    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:59 GMT
	I0409 00:52:59.285268    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:52:59.780895    2144 type.go:168] "Request Body" body=""
	I0409 00:52:59.780895    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:52:59.780895    2144 round_trippers.go:476] Request Headers:
	I0409 00:52:59.780895    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:52:59.780895    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:52:59.785645    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:52:59.785721    2144 round_trippers.go:584] Response Headers:
	I0409 00:52:59.785721    2144 round_trippers.go:587]     Audit-Id: a62344eb-d104-4ca6-950c-8783a3f3d5fc
	I0409 00:52:59.785721    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:52:59.785779    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:52:59.785779    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:52:59.785779    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:52:59.785779    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:52:59.785779    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:52:59 GMT
	I0409 00:52:59.786068    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:53:00.280387    2144 type.go:168] "Request Body" body=""
	I0409 00:53:00.280387    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:53:00.280387    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:00.280387    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:00.280387    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:00.285309    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:53:00.285356    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:00.285356    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:00.285356    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:00.285356    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:00.285356    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:53:00.285356    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:00 GMT
	I0409 00:53:00.285356    2144 round_trippers.go:587]     Audit-Id: c6568641-3f59-4ad5-a292-3bfa0036d171
	I0409 00:53:00.285356    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:00.285356    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:53:00.780415    2144 type.go:168] "Request Body" body=""
	I0409 00:53:00.780415    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:53:00.780415    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:00.780415    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:00.780415    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:00.785212    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:53:00.785212    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:00.785212    2144 round_trippers.go:587]     Audit-Id: f1df2e2e-9e22-48c6-8cf3-3e4cffb55bfe
	I0409 00:53:00.785212    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:00.785212    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:00.785212    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:00.785212    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:00.785212    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:53:00.785212    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:00 GMT
	I0409 00:53:00.785472    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:53:00.785849    2144 node_ready.go:53] node "multinode-611500-m02" has status "Ready":"False"
	I0409 00:53:01.281030    2144 type.go:168] "Request Body" body=""
	I0409 00:53:01.281030    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:53:01.281030    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:01.281030    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:01.281030    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:01.285376    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:53:01.285840    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:01.285840    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:01 GMT
	I0409 00:53:01.285840    2144 round_trippers.go:587]     Audit-Id: 6b6f41c9-f6be-4fc3-acfb-f7a66bd44bc7
	I0409 00:53:01.285840    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:01.285840    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:01.285840    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:01.285840    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:01.285840    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:53:01.286178    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:53:01.780317    2144 type.go:168] "Request Body" body=""
	I0409 00:53:01.780976    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:53:01.780976    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:01.780976    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:01.780976    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:01.784382    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:53:01.784482    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:01.784482    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:01.784482    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:01.784482    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:01.784482    2144 round_trippers.go:587]     Content-Length: 3092
	I0409 00:53:01.784482    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:01 GMT
	I0409 00:53:01.784482    2144 round_trippers.go:587]     Audit-Id: d683ac55-8af6-4bb7-bb78-bd0dec2ff868
	I0409 00:53:01.784482    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:01.784786    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fd 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 31 37 38 00 42  |bd39faf32.6178.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14290 chars]
	 >
	I0409 00:53:02.280440    2144 type.go:168] "Request Body" body=""
	I0409 00:53:02.280440    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:53:02.280440    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:02.280440    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:02.280440    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:02.284736    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:53:02.284812    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:02.284812    2144 round_trippers.go:587]     Content-Length: 2970
	I0409 00:53:02.284812    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:02 GMT
	I0409 00:53:02.284812    2144 round_trippers.go:587]     Audit-Id: 0da454e0-a687-48bd-8575-7cede856b2c4
	I0409 00:53:02.284812    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:02.284812    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:02.284812    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:02.284812    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:02.285098    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 83 17 0a af 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 34 37 38 00 42  |bd39faf32.6478.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 13664 chars]
	 >
	I0409 00:53:02.285304    2144 node_ready.go:49] node "multinode-611500-m02" has status "Ready":"True"
	I0409 00:53:02.285304    2144 node_ready.go:38] duration metric: took 28.0052596s for node "multinode-611500-m02" to be "Ready" ...
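The loop logged above is the standard poll-until-ready pattern: GET the Node object every ~500ms, inspect its Ready condition, and stop once it reports True, recording the elapsed time as the duration metric. A compilable sketch of the same pattern against client-go (function and package names are assumptions for illustration, not minikube's actual node_ready.go):

    package nodewait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the named Node every 500ms (the cadence visible in
    // the timestamps above) until its Ready condition is True or the timeout
    // expires.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat API hiccups as "not ready yet" and keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }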
	I0409 00:53:02.285304    2144 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0409 00:53:02.285404    2144 type.go:204] "Request Body" body=""
	I0409 00:53:02.285404    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods
	I0409 00:53:02.285404    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:02.285404    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:02.285404    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:02.289078    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:53:02.289078    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:02.289078    2144 round_trippers.go:587]     Audit-Id: cacc7000-b9f9-4a38-85d6-8d5bafe54d5e
	I0409 00:53:02.289078    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:02.289143    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:02.289143    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:02.289143    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:02.289143    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:02 GMT
	I0409 00:53:02.290962    2144 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 b7 94 03 0a  09 0a 00 12 03 36 34 37  |ist..........647|
		00000020  1a 00 12 d4 27 0a ae 19  0a 18 63 6f 72 65 64 6e  |....'.....coredn|
		00000030  73 2d 36 36 38 64 36 62  66 39 62 63 2d 64 35 34  |s-668d6bf9bc-d54|
		00000040  73 34 12 13 63 6f 72 65  64 6e 73 2d 36 36 38 64  |s4..coredns-668d|
		00000050  36 62 66 39 62 63 2d 1a  0b 6b 75 62 65 2d 73 79  |6bf9bc-..kube-sy|
		00000060  73 74 65 6d 22 00 2a 24  31 32 34 33 31 66 32 37  |stem".*$12431f27|
		00000070  2d 37 65 34 65 2d 34 31  63 39 2d 38 64 35 34 2d  |-7e4e-41c9-8d54-|
		00000080  62 63 37 62 65 32 30 37  34 62 39 63 32 03 34 33  |bc7be2074b9c2.43|
		00000090  36 38 00 42 08 08 96 88  d7 bf 06 10 00 5a 13 0a  |68.B.........Z..|
		000000a0  07 6b 38 73 2d 61 70 70  12 08 6b 75 62 65 2d 64  |.k8s-app..kube-d|
		000000b0  6e 73 5a 1f 0a 11 70 6f  64 2d 74 65 6d 70 6c 61  |nsZ...pod-templa|
		000000c0  74 65 2d 68 61 73 68 12  0a 36 36 38 64 36 62 66  |te-hash..668d6b [truncated 254764 chars]
	 >
	I0409 00:53:02.291690    2144 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace to be "Ready" ...
	I0409 00:53:02.291690    2144 type.go:168] "Request Body" body=""
	I0409 00:53:02.291690    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 00:53:02.291690    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:02.291690    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:02.291690    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:02.294863    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:53:02.294863    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:02.294863    2144 round_trippers.go:587]     Audit-Id: f6e9168c-1b97-4485-8a4a-da08a5186c93
	I0409 00:53:02.294863    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:02.294863    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:02.294863    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:02.294863    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:02.294863    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:02 GMT
	I0409 00:53:02.294863    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d4 27 0a ae 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.'.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 64 35 34 73 34 12  |68d6bf9bc-d54s4.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 32 34  33 31 66 32 37 2d 37 65  |m".*$12431f27-7e|
		00000060  34 65 2d 34 31 63 39 2d  38 64 35 34 2d 62 63 37  |4e-41c9-8d54-bc7|
		00000070  62 65 32 30 37 34 62 39  63 32 03 34 33 36 38 00  |be2074b9c2.4368.|
		00000080  42 08 08 96 88 d7 bf 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 24171 chars]
	 >
	I0409 00:53:02.295403    2144 type.go:168] "Request Body" body=""
	I0409 00:53:02.295585    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:53:02.295585    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:02.295585    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:02.295585    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:02.300043    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:53:02.300043    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:02.300043    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:02.300043    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:02.300043    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:02.300043    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:02 GMT
	I0409 00:53:02.300043    2144 round_trippers.go:587]     Audit-Id: ee02af37-8f3b-47f9-8b87-6e4689c6f118
	I0409 00:53:02.300043    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:02.300523    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d9 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 34 34  36 38 00 42 08 08 8d 88  |34242.4468.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21017 chars]
	 >
	I0409 00:53:02.300580    2144 pod_ready.go:93] pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace has status "Ready":"True"
	I0409 00:53:02.300580    2144 pod_ready.go:82] duration metric: took 8.8891ms for pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace to be "Ready" ...
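Once the node turns Ready, the same wait machinery runs once per system-critical pod; a pod that is already Ready (as coredns-668d6bf9bc-d54s4 was here) resolves in single-digit milliseconds. The per-pod condition being tested reduces to roughly this (hypothetical helper, shown for clarity):

    package podwait

    import corev1 "k8s.io/api/core/v1"

    // isPodReady reports whether the pod's PodReady condition is True, which
    // is what flips these log lines from "Ready":"False" to "Ready":"True".
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }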
	I0409 00:53:02.300580    2144 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 00:53:02.300580    2144 type.go:168] "Request Body" body=""
	I0409 00:53:02.300580    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-611500
	I0409 00:53:02.300580    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:02.300580    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:02.300580    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:02.303855    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:53:02.303855    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:02.303922    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:02.303922    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:02 GMT
	I0409 00:53:02.303922    2144 round_trippers.go:587]     Audit-Id: 0ade36e3-2085-4938-bb8f-97a9724d7199
	I0409 00:53:02.303922    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:02.303955    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:02.303955    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:02.304098    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 2b 0a a0 1a 0a 15 65  74 63 64 2d 6d 75 6c 74  |.+.....etcd-mult|
		00000020  69 6e 6f 64 65 2d 36 31  31 35 30 30 12 00 1a 0b  |inode-611500....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 36  |kube-system".*$6|
		00000040  32 32 64 39 61 61 61 2d  31 66 32 66 2d 34 33 35  |22d9aaa-1f2f-435|
		00000050  63 2d 38 63 65 61 2d 62  35 33 62 61 64 62 61 32  |c-8cea-b53badba2|
		00000060  37 66 34 32 03 33 39 35  38 00 42 08 08 90 88 d7  |7f42.3958.B.....|
		00000070  bf 06 10 00 5a 11 0a 09  63 6f 6d 70 6f 6e 65 6e  |....Z...componen|
		00000080  74 12 04 65 74 63 64 5a  15 0a 04 74 69 65 72 12  |t..etcdZ...tier.|
		00000090  0d 63 6f 6e 74 72 6f 6c  2d 70 6c 61 6e 65 62 50  |.control-planebP|
		000000a0  0a 30 6b 75 62 65 61 64  6d 2e 6b 75 62 65 72 6e  |.0kubeadm.kubern|
		000000b0  65 74 65 73 2e 69 6f 2f  65 74 63 64 2e 61 64 76  |etes.io/etcd.adv|
		000000c0  65 72 74 69 73 65 2d 63  6c 69 65 6e 74 2d 75 72  |ertise-client-u [truncated 26543 chars]
	 >
	I0409 00:53:02.304699    2144 type.go:168] "Request Body" body=""
	I0409 00:53:02.304763    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:53:02.304886    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:02.304886    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:02.304928    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:02.309462    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:53:02.309462    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:02.309462    2144 round_trippers.go:587]     Audit-Id: db14eb10-91f6-48b5-96ff-c6e8b5ecb4ff
	I0409 00:53:02.309462    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:02.309553    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:02.309553    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:02.309553    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:02.309553    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:02 GMT
	I0409 00:53:02.310183    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d9 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 34 34  36 38 00 42 08 08 8d 88  |34242.4468.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21017 chars]
	 >
	I0409 00:53:02.310395    2144 pod_ready.go:93] pod "etcd-multinode-611500" in "kube-system" namespace has status "Ready":"True"
	I0409 00:53:02.310395    2144 pod_ready.go:82] duration metric: took 9.8158ms for pod "etcd-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 00:53:02.310481    2144 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 00:53:02.310555    2144 type.go:168] "Request Body" body=""
	I0409 00:53:02.310555    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-611500
	I0409 00:53:02.310681    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:02.310681    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:02.310681    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:02.312861    2144 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 00:53:02.313574    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:02.313574    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:02.313633    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:02.313633    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:02 GMT
	I0409 00:53:02.313633    2144 round_trippers.go:587]     Audit-Id: 8401bfe9-bba6-4179-9515-b8390ee4d67b
	I0409 00:53:02.313633    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:02.313633    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:02.314195    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  99 34 0a b0 1c 0a 1f 6b  75 62 65 2d 61 70 69 73  |.4.....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  36 31 31 35 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |611500....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 35 30 31 39 36 37 37  |ystem".*$5019677|
		00000050  35 2d 62 63 30 63 2d 34  31 63 31 2d 62 33 36 63  |5-bc0c-41c1-b36c|
		00000060  2d 31 39 33 36 39 35 64  32 64 62 32 33 32 03 33  |-193695d2db232.3|
		00000070  39 31 38 00 42 08 08 90  88 d7 bf 06 10 00 5a 1b  |918.B.........Z.|
		00000080  0a 09 63 6f 6d 70 6f 6e  65 6e 74 12 0e 6b 75 62  |..component..kub|
		00000090  65 2d 61 70 69 73 65 72  76 65 72 5a 15 0a 04 74  |e-apiserverZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 57 0a 3f 6b 75  62 65 61 64 6d 2e 6b 75  |nebW.?kubeadm.ku|
		000000c0  62 65 72 6e 65 74 65 73  2e 69 6f 2f 6b 75 62 65  |bernetes.io/kub [truncated 32076 chars]
	 >
	I0409 00:53:02.314480    2144 type.go:168] "Request Body" body=""
	I0409 00:53:02.314559    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:53:02.314610    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:02.314610    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:02.314637    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:02.317936    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:53:02.318009    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:02.318009    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:02 GMT
	I0409 00:53:02.318009    2144 round_trippers.go:587]     Audit-Id: 3f0dad50-5909-4ca5-b194-22fccad9df1f
	I0409 00:53:02.318009    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:02.318094    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:02.318094    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:02.318094    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:02.318612    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d9 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 34 34  36 38 00 42 08 08 8d 88  |34242.4468.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21017 chars]
	 >
	I0409 00:53:02.318749    2144 pod_ready.go:93] pod "kube-apiserver-multinode-611500" in "kube-system" namespace has status "Ready":"True"
	I0409 00:53:02.318818    2144 pod_ready.go:82] duration metric: took 8.3366ms for pod "kube-apiserver-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 00:53:02.318818    2144 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 00:53:02.319087    2144 type.go:168] "Request Body" body=""
	I0409 00:53:02.319153    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 00:53:02.319192    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:02.319229    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:02.319229    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:02.325418    2144 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 00:53:02.325502    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:02.325502    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:02.325502    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:02.325502    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:02 GMT
	I0409 00:53:02.325502    2144 round_trippers.go:587]     Audit-Id: 43f27fda-bc78-4880-a478-a512bcbd19a9
	I0409 00:53:02.325546    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:02.325546    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:02.326094    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  f5 30 0a 9b 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.0....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 03  33 38 38 38 00 42 08 08  |ec96062.3888.B..|
		00000080  90 88 d7 bf 06 10 00 5a  24 0a 09 63 6f 6d 70 6f  |.......Z$..compo|
		00000090  6e 65 6e 74 12 17 6b 75  62 65 2d 63 6f 6e 74 72  |nent..kube-contr|
		000000a0  6f 6c 6c 65 72 2d 6d 61  6e 61 67 65 72 5a 15 0a  |oller-managerZ..|
		000000b0  04 74 69 65 72 12 0d 63  6f 6e 74 72 6f 6c 2d 70  |.tier..control-p|
		000000c0  6c 61 6e 65 62 3d 0a 19  6b 75 62 65 72 6e 65 74  |laneb=..kuberne [truncated 30018 chars]
	 >
	I0409 00:53:02.326094    2144 type.go:168] "Request Body" body=""
	I0409 00:53:02.326094    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:53:02.326094    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:02.326094    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:02.326094    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:02.328947    2144 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 00:53:02.328947    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:02.328947    2144 round_trippers.go:587]     Audit-Id: 55485d57-5dee-436d-ac0c-a9f3b43ed515
	I0409 00:53:02.328947    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:02.328947    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:02.328947    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:02.328947    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:02.328947    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:02 GMT
	I0409 00:53:02.328947    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d9 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 34 34  36 38 00 42 08 08 8d 88  |34242.4468.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21017 chars]
	 >
	I0409 00:53:02.328947    2144 pod_ready.go:93] pod "kube-controller-manager-multinode-611500" in "kube-system" namespace has status "Ready":"True"
	I0409 00:53:02.328947    2144 pod_ready.go:82] duration metric: took 9.942ms for pod "kube-controller-manager-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 00:53:02.328947    2144 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bhjnx" in "kube-system" namespace to be "Ready" ...
	I0409 00:53:02.328947    2144 type.go:168] "Request Body" body=""
	I0409 00:53:02.481435    2144 request.go:661] Waited for 152.4859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bhjnx
	I0409 00:53:02.481435    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bhjnx
	I0409 00:53:02.481435    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:02.481435    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:02.481435    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:02.485948    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:53:02.486474    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:02.486474    2144 round_trippers.go:587]     Audit-Id: f89869e1-093d-4c02-8041-4fb09637c291
	I0409 00:53:02.486474    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:02.486474    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:02.486474    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:02.486474    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:02.486474    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:02 GMT
	I0409 00:53:02.486927    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  af 25 0a c1 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 62 68 6a 6e 78 12  0b 6b 75 62 65 2d 70 72  |y-bhjnx..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 61 66 62  36 64 61 39 39 2d 64 65  |m".*$afb6da99-de|
		00000050  39 39 2d 34 39 63 34 2d  62 30 38 30 2d 38 35 30  |99-49c4-b080-850|
		00000060  30 62 34 62 30 38 64 39  62 32 03 36 32 35 38 00  |0b4b08d9b2.6258.|
		00000070  42 08 08 d1 89 d7 bf 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  37 62 62 38 34 63 34 39  |n-hash..7bb84c49|
		000000a0  38 34 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |84Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22744 chars]
	 >
	I0409 00:53:02.487144    2144 type.go:168] "Request Body" body=""
	I0409 00:53:02.680777    2144 request.go:661] Waited for 193.6301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:53:02.680777    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500-m02
	I0409 00:53:02.680777    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:02.680777    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:02.680777    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:02.684393    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:53:02.684452    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:02.684452    2144 round_trippers.go:587]     Audit-Id: f3926084-e3e6-4744-971f-8c113cf3fce0
	I0409 00:53:02.684452    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:02.684452    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:02.684452    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:02.684452    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:02.684452    2144 round_trippers.go:587]     Content-Length: 2970
	I0409 00:53:02.684452    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:02 GMT
	I0409 00:53:02.684688    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 83 17 0a af 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 03 36 34 37 38 00 42  |bd39faf32.6478.B|
		00000060  08 08 d1 89 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 13664 chars]
	 >
	I0409 00:53:02.684853    2144 pod_ready.go:93] pod "kube-proxy-bhjnx" in "kube-system" namespace has status "Ready":"True"
	I0409 00:53:02.684907    2144 pod_ready.go:82] duration metric: took 355.9552ms for pod "kube-proxy-bhjnx" in "kube-system" namespace to be "Ready" ...
	I0409 00:53:02.684907    2144 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zxxgf" in "kube-system" namespace to be "Ready" ...
	I0409 00:53:02.685001    2144 type.go:168] "Request Body" body=""
	I0409 00:53:02.880685    2144 request.go:661] Waited for 195.6282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxxgf
	I0409 00:53:02.881103    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxxgf
	I0409 00:53:02.881103    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:02.881103    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:02.881103    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:02.884407    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:53:02.884436    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:02.884436    2144 round_trippers.go:587]     Audit-Id: 4906a4d0-5db2-4cf1-950e-cad1b25e3451
	I0409 00:53:02.884436    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:02.884436    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:02.884436    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:02.884516    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:02.884516    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:02 GMT
	I0409 00:53:02.884926    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a7 25 0a c1 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 7a 78 78 67 66 12  0b 6b 75 62 65 2d 70 72  |y-zxxgf..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 33 35 30  36 65 65 65 37 2d 64 39  |m".*$3506eee7-d9|
		00000050  34 36 2d 34 64 64 65 2d  39 31 63 39 2d 39 66 63  |46-4dde-91c9-9fc|
		00000060  35 63 31 34 37 34 34 33  34 32 03 33 39 32 38 00  |5c14744342.3928.|
		00000070  42 08 08 96 88 d7 bf 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  37 62 62 38 34 63 34 39  |n-hash..7bb84c49|
		000000a0  38 34 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |84Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22673 chars]
	 >
	I0409 00:53:02.885247    2144 type.go:168] "Request Body" body=""
	I0409 00:53:03.081083    2144 request.go:661] Waited for 195.7827ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:53:03.081083    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:53:03.081083    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:03.081083    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:03.081083    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:03.085084    2144 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 00:53:03.085084    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:03.085084    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:03.085084    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:03 GMT
	I0409 00:53:03.085084    2144 round_trippers.go:587]     Audit-Id: 30faa464-41ac-4433-ac2b-6374cc4afbcb
	I0409 00:53:03.085084    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:03.085084    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:03.085084    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:03.085566    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d9 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 34 34  36 38 00 42 08 08 8d 88  |34242.4468.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21017 chars]
	 >
	I0409 00:53:03.085773    2144 pod_ready.go:93] pod "kube-proxy-zxxgf" in "kube-system" namespace has status "Ready":"True"
	I0409 00:53:03.085853    2144 pod_ready.go:82] duration metric: took 400.8914ms for pod "kube-proxy-zxxgf" in "kube-system" namespace to be "Ready" ...
	I0409 00:53:03.085853    2144 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 00:53:03.085923    2144 type.go:168] "Request Body" body=""
	I0409 00:53:03.281067    2144 request.go:661] Waited for 195.0635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-611500
	I0409 00:53:03.281067    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-611500
	I0409 00:53:03.281544    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:03.281544    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:03.281544    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:03.285968    2144 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 00:53:03.286069    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:03.286069    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:03.286069    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:03 GMT
	I0409 00:53:03.286069    2144 round_trippers.go:587]     Audit-Id: b439036f-5bf0-4b5f-9f72-f6604ea14dfc
	I0409 00:53:03.286069    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:03.286069    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:03.286139    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:03.286975    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  80 23 0a 83 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.#.....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  36 31 31 35 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |611500....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 39 31 38 35 64 35 63  |ystem".*$9185d5c|
		00000050  30 2d 62 32 38 61 2d 34  33 38 63 2d 62 30 35 61  |0-b28a-438c-b05a|
		00000060  2d 36 34 36 36 37 65 34  61 63 33 64 37 32 03 33  |-64667e4ac3d72.3|
		00000070  38 33 38 00 42 08 08 90  88 d7 bf 06 10 00 5a 1b  |838.B.........Z.|
		00000080  0a 09 63 6f 6d 70 6f 6e  65 6e 74 12 0e 6b 75 62  |..component..kub|
		00000090  65 2d 73 63 68 65 64 75  6c 65 72 5a 15 0a 04 74  |e-schedulerZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 3d 0a 19 6b 75  62 65 72 6e 65 74 65 73  |neb=..kubernetes|
		000000c0  2e 69 6f 2f 63 6f 6e 66  69 67 2e 68 61 73 68 12  |.io/config.hash [truncated 21244 chars]
	 >
	I0409 00:53:03.287233    2144 type.go:168] "Request Body" body=""
	I0409 00:53:03.480931    2144 request.go:661] Waited for 193.6956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:53:03.481414    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes/multinode-611500
	I0409 00:53:03.481414    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:03.481414    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:03.481483    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:03.486725    2144 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 00:53:03.486725    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:03.486783    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:03.486783    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:03.486783    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:03.486806    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:03 GMT
	I0409 00:53:03.486806    2144 round_trippers.go:587]     Audit-Id: d0dcafc6-875a-489f-8cf6-154bfe8c0e90
	I0409 00:53:03.486806    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:03.487145    2144 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d9 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 03 34 34  36 38 00 42 08 08 8d 88  |34242.4468.B....|
		00000060  d7 bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21017 chars]
	 >
	I0409 00:53:03.487356    2144 pod_ready.go:93] pod "kube-scheduler-multinode-611500" in "kube-system" namespace has status "Ready":"True"
	I0409 00:53:03.487411    2144 pod_ready.go:82] duration metric: took 401.4823ms for pod "kube-scheduler-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 00:53:03.487411    2144 pod_ready.go:39] duration metric: took 1.2020209s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0409 00:53:03.487411    2144 system_svc.go:44] waiting for kubelet service to be running ....
	I0409 00:53:03.499955    2144 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0409 00:53:03.523661    2144 system_svc.go:56] duration metric: took 36.2491ms WaitForService to wait for kubelet
	I0409 00:53:03.523661    2144 kubeadm.go:582] duration metric: took 29.4804036s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0409 00:53:03.523661    2144 node_conditions.go:102] verifying NodePressure condition ...
	I0409 00:53:03.523661    2144 type.go:204] "Request Body" body=""
	I0409 00:53:03.681068    2144 request.go:661] Waited for 157.4048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.113.157:8443/api/v1/nodes
	I0409 00:53:03.681068    2144 round_trippers.go:470] GET https://192.168.113.157:8443/api/v1/nodes
	I0409 00:53:03.681068    2144 round_trippers.go:476] Request Headers:
	I0409 00:53:03.681068    2144 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 00:53:03.681068    2144 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 00:53:03.683679    2144 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 00:53:03.684376    2144 round_trippers.go:584] Response Headers:
	I0409 00:53:03.684509    2144 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 00:53:03.684509    2144 round_trippers.go:587]     Date: Wed, 09 Apr 2025 00:53:03 GMT
	I0409 00:53:03.684509    2144 round_trippers.go:587]     Audit-Id: 6810aef9-7901-43ff-885f-4b7ed97692a5
	I0409 00:53:03.684509    2144 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 00:53:03.684509    2144 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 00:53:03.684720    2144 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 00:53:03.685371    2144 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 ed 39 0a  09 0a 00 12 03 36 34 39  |List..9......649|
		00000020  1a 00 12 d9 22 0a 8a 11  0a 10 6d 75 6c 74 69 6e  |....".....multin|
		00000030  6f 64 65 2d 36 31 31 35  30 30 12 00 1a 00 22 00  |ode-611500....".|
		00000040  2a 24 62 31 32 35 32 66  34 61 2d 32 32 33 30 2d  |*$b1252f4a-2230-|
		00000050  34 36 61 36 2d 39 33 38  62 2d 37 63 30 37 31 31  |46a6-938b-7c0711|
		00000060  31 33 33 34 32 34 32 03  34 34 36 38 00 42 08 08  |1334242.4468.B..|
		00000070  8d 88 d7 bf 06 10 00 5a  20 0a 17 62 65 74 61 2e  |.......Z ..beta.|
		00000080  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		00000090  63 68 12 05 61 6d 64 36  34 5a 1e 0a 15 62 65 74  |ch..amd64Z...bet|
		000000a0  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		000000b0  6f 73 12 05 6c 69 6e 75  78 5a 1b 0a 12 6b 75 62  |os..linuxZ...kub|
		000000c0  65 72 6e 65 74 65 73 2e  69 6f 2f 61 72 63 68 12  |ernetes.io/arch [truncated 35703 chars]
	 >
	I0409 00:53:03.685731    2144 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0409 00:53:03.685731    2144 node_conditions.go:123] node cpu capacity is 2
	I0409 00:53:03.685849    2144 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0409 00:53:03.685849    2144 node_conditions.go:123] node cpu capacity is 2
	I0409 00:53:03.685849    2144 node_conditions.go:105] duration metric: took 162.1859ms to run NodePressure ...
	I0409 00:53:03.685849    2144 start.go:241] waiting for startup goroutines ...
	I0409 00:53:03.686040    2144 start.go:255] writing updated cluster config ...
	I0409 00:53:03.699905    2144 ssh_runner.go:195] Run: rm -f paused
	I0409 00:53:03.850965    2144 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0409 00:53:03.854937    2144 out.go:177] * Done! kubectl is now configured to use "multinode-611500" cluster and "default" namespace by default
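
The pod_ready.go entries above poll each control-plane pod until its Ready condition reports "True", and the request.go:661 lines show client-go's token-bucket limiter deliberately delaying requests to stay under the client-side QPS. Below is a minimal sketch of the same wait loop, assuming a standard client-go setup; `waitPodReady` and `isPodReady` are illustrative names, not minikube's actual helpers.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, the same
// check pod_ready.go logs as `has status "Ready":"True"`.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady re-GETs the pod until it is Ready or the timeout expires,
// mirroring the "waiting up to 6m0s for pod ..." lines above. client-go's
// default rate limiter is what produces the "Waited ... due to client-side
// throttling" messages between these requests.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-multinode-611500", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
```

The 6-minute timeout matches the "waiting up to 6m0s" lines in the trace; the half-second sleep stands in for whatever backoff the real code uses.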
	
	
	==> Docker <==
	Apr 09 00:49:46 multinode-611500 dockerd[1454]: time="2025-04-09T00:49:46.713284417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 09 00:49:46 multinode-611500 dockerd[1454]: time="2025-04-09T00:49:46.720807279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 09 00:49:46 multinode-611500 dockerd[1454]: time="2025-04-09T00:49:46.720879680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 09 00:49:46 multinode-611500 dockerd[1454]: time="2025-04-09T00:49:46.720906480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 09 00:49:46 multinode-611500 dockerd[1454]: time="2025-04-09T00:49:46.721452285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 09 00:49:46 multinode-611500 cri-dockerd[1347]: time="2025-04-09T00:49:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/38b71116bee4157d0d5ab39bc3ab3604a3eb24e25a3a3181ce21d0eb84b54daf/resolv.conf as [nameserver 192.168.112.1]"
	Apr 09 00:49:46 multinode-611500 cri-dockerd[1347]: time="2025-04-09T00:49:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5709459d3357ee2ec45d00e08374868c7d57bdcc834b507eac2020850e1934ca/resolv.conf as [nameserver 192.168.112.1]"
	Apr 09 00:49:47 multinode-611500 dockerd[1454]: time="2025-04-09T00:49:47.067710762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 09 00:49:47 multinode-611500 dockerd[1454]: time="2025-04-09T00:49:47.067926264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 09 00:49:47 multinode-611500 dockerd[1454]: time="2025-04-09T00:49:47.067991665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 09 00:49:47 multinode-611500 dockerd[1454]: time="2025-04-09T00:49:47.068485969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 09 00:49:47 multinode-611500 dockerd[1454]: time="2025-04-09T00:49:47.205338487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 09 00:49:47 multinode-611500 dockerd[1454]: time="2025-04-09T00:49:47.205681290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 09 00:49:47 multinode-611500 dockerd[1454]: time="2025-04-09T00:49:47.205698290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 09 00:49:47 multinode-611500 dockerd[1454]: time="2025-04-09T00:49:47.205896792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 09 00:53:28 multinode-611500 dockerd[1454]: time="2025-04-09T00:53:28.874331052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 09 00:53:28 multinode-611500 dockerd[1454]: time="2025-04-09T00:53:28.874480253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 09 00:53:28 multinode-611500 dockerd[1454]: time="2025-04-09T00:53:28.874544154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 09 00:53:28 multinode-611500 dockerd[1454]: time="2025-04-09T00:53:28.875133660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 09 00:53:29 multinode-611500 cri-dockerd[1347]: time="2025-04-09T00:53:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b5dfc9645b5a9353ae53b32225ed4966d43eb5c4eb0fc876c4fe812a4cabb6a0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 09 00:53:30 multinode-611500 cri-dockerd[1347]: time="2025-04-09T00:53:30Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 09 00:53:31 multinode-611500 dockerd[1454]: time="2025-04-09T00:53:31.118926189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 09 00:53:31 multinode-611500 dockerd[1454]: time="2025-04-09T00:53:31.119305494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 09 00:53:31 multinode-611500 dockerd[1454]: time="2025-04-09T00:53:31.119360895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 09 00:53:31 multinode-611500 dockerd[1454]: time="2025-04-09T00:53:31.120000503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
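
The cri-dockerd lines above rewrite each container's resolv.conf so pods using the cluster-first DNS policy resolve through the cluster DNS service (10.96.0.10 here) with the cluster search domains and ndots:5. A hedged sketch of producing such a file follows; `writeResolvConf` is a made-up helper, not cri-dockerd's code.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// writeResolvConf renders a pod resolv.conf like the one cri-dockerd logs:
//   nameserver 10.96.0.10
//   search default.svc.cluster.local svc.cluster.local cluster.local
//   options ndots:5
func writeResolvConf(path string, nameservers, searches []string, ndots int) error {
	var b strings.Builder
	for _, ns := range nameservers {
		fmt.Fprintf(&b, "nameserver %s\n", ns)
	}
	if len(searches) > 0 {
		fmt.Fprintf(&b, "search %s\n", strings.Join(searches, " "))
	}
	fmt.Fprintf(&b, "options ndots:%d\n", ndots)
	return os.WriteFile(path, []byte(b.String()), 0o644)
}

func main() {
	err := writeResolvConf("/tmp/resolv.conf.example",
		[]string{"10.96.0.10"},
		[]string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local"},
		5)
	if err != nil {
		panic(err)
	}
}
```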
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b2c663be115f5       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   50 seconds ago      Running             busybox                   0                   b5dfc9645b5a9       busybox-58667487b6-q97dd
	934a19227cebf       c69fa2e9cbf5f                                                                                         4 minutes ago       Running             coredns                   0                   5709459d3357e       coredns-668d6bf9bc-d54s4
	81bdf2c1b915f       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   38b71116bee41       storage-provisioner
	14703ff53a0b7       kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495              4 minutes ago       Running             kindnet-cni               0                   40c7183a37ea2       kindnet-vntlr
	1a9f657c2b5a3       f1332858868e1                                                                                         4 minutes ago       Running             kube-proxy                0                   0a2ad19ce50fc       kube-proxy-zxxgf
	8fec401b4d086       d8e673e7c9983                                                                                         5 minutes ago       Running             kube-scheduler            0                   77b1d88aa1629       kube-scheduler-multinode-611500
	45eca668cef55       a9e7e6b294baf                                                                                         5 minutes ago       Running             etcd                      0                   c41f8955903aa       etcd-multinode-611500
	729d2794ba86f       b6a454c5a800d                                                                                         5 minutes ago       Running             kube-controller-manager   0                   ac3e2538b3ca0       kube-controller-manager-multinode-611500
	9698a4747b5a1       85b7a174738ba                                                                                         5 minutes ago       Running             kube-apiserver            0                   bc594b9349b9c       kube-apiserver-multinode-611500
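
The ATTEMPT column in the table above corresponds to each container's restart count, which the API also exposes in pod status. A small client-go sketch that prints the same information; the field names are real, but the program itself is only illustrative.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// List every pod in kube-system and print per-container restart counts,
	// the counterpart of the ATTEMPT column in the table above.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, st := range pod.Status.ContainerStatuses {
			fmt.Printf("%-45s %-25s restarts=%d ready=%v\n",
				pod.Name, st.Name, st.RestartCount, st.Ready)
		}
	}
}
```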
	
	
	==> coredns [934a19227ceb] <==
	[INFO] 10.244.1.2:51946 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000125001s
	[INFO] 10.244.0.3:43588 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145702s
	[INFO] 10.244.0.3:51460 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000099601s
	[INFO] 10.244.0.3:55687 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000278303s
	[INFO] 10.244.0.3:40394 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000516106s
	[INFO] 10.244.0.3:40522 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000223003s
	[INFO] 10.244.0.3:37860 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000171603s
	[INFO] 10.244.0.3:39917 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000301904s
	[INFO] 10.244.0.3:46701 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169703s
	[INFO] 10.244.1.2:34733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156902s
	[INFO] 10.244.1.2:58701 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161202s
	[INFO] 10.244.1.2:40033 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000199402s
	[INFO] 10.244.1.2:46371 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072001s
	[INFO] 10.244.0.3:36931 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132602s
	[INFO] 10.244.0.3:33483 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000175203s
	[INFO] 10.244.0.3:38836 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000286804s
	[INFO] 10.244.0.3:37565 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127601s
	[INFO] 10.244.1.2:40936 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181303s
	[INFO] 10.244.1.2:36358 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000184102s
	[INFO] 10.244.1.2:44504 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000107402s
	[INFO] 10.244.1.2:55001 - 5 "PTR IN 1.112.168.192.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd 106 0.000108502s
	[INFO] 10.244.0.3:32994 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000206602s
	[INFO] 10.244.0.3:57902 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000174602s
	[INFO] 10.244.0.3:43398 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000120602s
	[INFO] 10.244.0.3:39057 - 5 "PTR IN 1.112.168.192.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd 106 0.000086501s
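
The NXDOMAIN answers for names like kubernetes.default.default.svc.cluster.local are the resolver's search-list expansion at work: "kubernetes.default" has fewer dots than ndots:5, so each search suffix from the pod's resolv.conf is tried before the name itself. A sketch of that expansion rule, assuming glibc-style semantics; `expandQuery` is a hypothetical helper, not CoreDNS code.

```go
package main

import (
	"fmt"
	"strings"
)

// expandQuery applies the glibc/musl search-list rule: a name with fewer
// than ndots dots is first tried with each search suffix appended, then
// as-is. This is why querying "kubernetes.default" produces the NXDOMAIN
// lookup for kubernetes.default.default.svc.cluster.local seen in the log.
func expandQuery(name string, searches []string, ndots int) []string {
	if strings.Count(name, ".") >= ndots || strings.HasSuffix(name, ".") {
		return []string{name}
	}
	out := make([]string, 0, len(searches)+1)
	for _, s := range searches {
		out = append(out, name+"."+s)
	}
	return append(out, name)
}

func main() {
	searches := []string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	for _, q := range expandQuery("kubernetes.default", searches, 5) {
		fmt.Println(q)
	}
}
```

Running it prints the same query sequence visible in the CoreDNS log above: the two in-cluster expansions, the cluster.local fallback, and finally the bare name.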
	
	
	==> describe nodes <==
	Name:               multinode-611500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-611500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83
	                    minikube.k8s.io/name=multinode-611500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_09T00_49_22_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Apr 2025 00:49:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-611500
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Apr 2025 00:54:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Apr 2025 00:53:57 +0000   Wed, 09 Apr 2025 00:49:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Apr 2025 00:53:57 +0000   Wed, 09 Apr 2025 00:49:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Apr 2025 00:53:57 +0000   Wed, 09 Apr 2025 00:49:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Apr 2025 00:53:57 +0000   Wed, 09 Apr 2025 00:49:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.113.157
	  Hostname:    multinode-611500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 f5db79aa67674d499c7e6b8dadc9f171
	  System UUID:                e993950d-aeba-6b4b-885d-4b2e551f8dbc
	  Boot ID:                    505a4b3d-fe12-41b0-bce1-1e8370aada0c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-q97dd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 coredns-668d6bf9bc-d54s4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m54s
	  kube-system                 etcd-multinode-611500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m
	  kube-system                 kindnet-vntlr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m54s
	  kube-system                 kube-apiserver-multinode-611500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-controller-manager-multinode-611500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-proxy-zxxgf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-scheduler-multinode-611500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m7s (x8 over 5m7s)  kubelet          Node multinode-611500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m7s (x8 over 5m7s)  kubelet          Node multinode-611500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m7s (x7 over 5m7s)  kubelet          Node multinode-611500 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m                   kubelet          Node multinode-611500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m                   kubelet          Node multinode-611500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m                   kubelet          Node multinode-611500 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m55s                node-controller  Node multinode-611500 event: Registered Node multinode-611500 in Controller
	  Normal  NodeReady                4m34s                kubelet          Node multinode-611500 status is now: NodeReady
	
	
	Name:               multinode-611500-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-611500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83
	                    minikube.k8s.io/name=multinode-611500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_09T00_52_33_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Apr 2025 00:52:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-611500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Apr 2025 00:54:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Apr 2025 00:53:34 +0000   Wed, 09 Apr 2025 00:52:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Apr 2025 00:53:34 +0000   Wed, 09 Apr 2025 00:52:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Apr 2025 00:53:34 +0000   Wed, 09 Apr 2025 00:52:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Apr 2025 00:53:34 +0000   Wed, 09 Apr 2025 00:53:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.113.143
	  Hostname:    multinode-611500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 ddab213e0f664ee998425119dc3e7a46
	  System UUID:                2b3ed102-bc59-9642-a45e-e1d26e5f9a17
	  Boot ID:                    6aa204db-bc4c-4b56-ad49-3d6e873355d1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-c426d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kindnet-66fr6               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      107s
	  kube-system                 kube-proxy-bhjnx            0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 95s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  108s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     107s                 cidrAllocator    Node multinode-611500-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  107s (x2 over 108s)  kubelet          Node multinode-611500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x2 over 108s)  kubelet          Node multinode-611500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x2 over 108s)  kubelet          Node multinode-611500-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           105s                 node-controller  Node multinode-611500-m02 event: Registered Node multinode-611500-m02 in Controller
	  Normal  NodeReady                78s                  kubelet          Node multinode-611500-m02 status is now: NodeReady
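
The percentages in the two Allocated resources tables are simply requests divided by node allocatable: on the control-plane node, 850m CPU against 2 cores (2000m) is 42%, and 220Mi of memory against 2164264Ki is about 10%. A tiny sketch of that arithmetic; `percentOf` is illustrative, not kubectl's code.

```go
package main

import "fmt"

// percentOf reproduces the kubectl-describe style percentage: requested
// quantity over node allocatable, truncated to an integer percent.
func percentOf(request, allocatable int64) int64 {
	return request * 100 / allocatable
}

func main() {
	// Control-plane node from the output above: 850m CPU of 2 cores (2000m),
	// and 220Mi (converted to Ki) of 2164264Ki memory.
	fmt.Printf("cpu:    %d%%\n", percentOf(850, 2000))         // 42%
	fmt.Printf("memory: %d%%\n", percentOf(220*1024, 2164264)) // 10%
}
```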
	
	
	==> dmesg <==
	[  +7.031443] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr 9 00:48] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.159163] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[ +30.729293] systemd-fstab-generator[1011]: Ignoring "noauto" option for root device
	[  +0.100805] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.533161] systemd-fstab-generator[1050]: Ignoring "noauto" option for root device
	[  +0.189249] systemd-fstab-generator[1062]: Ignoring "noauto" option for root device
	[  +0.250696] systemd-fstab-generator[1076]: Ignoring "noauto" option for root device
	[  +2.820613] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.165425] systemd-fstab-generator[1312]: Ignoring "noauto" option for root device
	[  +0.194705] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[  +0.252644] systemd-fstab-generator[1339]: Ignoring "noauto" option for root device
	[Apr 9 00:49] systemd-fstab-generator[1438]: Ignoring "noauto" option for root device
	[  +0.120650] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.514770] systemd-fstab-generator[1698]: Ignoring "noauto" option for root device
	[  +6.652302] systemd-fstab-generator[1851]: Ignoring "noauto" option for root device
	[  +0.104404] kauditd_printk_skb: 74 callbacks suppressed
	[  +7.535000] systemd-fstab-generator[2282]: Ignoring "noauto" option for root device
	[  +0.119733] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.646962] systemd-fstab-generator[2388]: Ignoring "noauto" option for root device
	[  +0.216349] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.148295] kauditd_printk_skb: 51 callbacks suppressed
	[Apr 9 00:53] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [45eca668cef5] <==
	{"level":"info","ts":"2025-04-09T00:49:15.934267Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-09T00:49:15.934468Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-09T00:49:15.927769Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-09T00:49:15.934845Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-09T00:49:15.935026Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-09T00:49:15.936875Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.113.157:2379"}
	{"level":"info","ts":"2025-04-09T00:49:29.824196Z","caller":"traceutil/trace.go:171","msg":"trace[741787946] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"159.417122ms","start":"2025-04-09T00:49:29.664763Z","end":"2025-04-09T00:49:29.824180Z","steps":["trace[741787946] 'process raft request'  (duration: 158.97022ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-09T00:49:34.125155Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.58927ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-611500\" limit:1 ","response":"range_response_count:1 size:4489"}
	{"level":"info","ts":"2025-04-09T00:49:34.125264Z","caller":"traceutil/trace.go:171","msg":"trace[451384899] range","detail":"{range_begin:/registry/minions/multinode-611500; range_end:; response_count:1; response_revision:398; }","duration":"194.76117ms","start":"2025-04-09T00:49:33.930490Z","end":"2025-04-09T00:49:34.125251Z","steps":["trace[451384899] 'range keys from in-memory index tree'  (duration: 194.501769ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-09T00:52:26.795736Z","caller":"traceutil/trace.go:171","msg":"trace[95306618] transaction","detail":"{read_only:false; response_revision:568; number_of_response:1; }","duration":"197.63551ms","start":"2025-04-09T00:52:26.598083Z","end":"2025-04-09T00:52:26.795718Z","steps":["trace[95306618] 'process raft request'  (duration: 197.505109ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-09T00:52:43.388704Z","caller":"traceutil/trace.go:171","msg":"trace[363065472] linearizableReadLoop","detail":"{readStateIndex:672; appliedIndex:671; }","duration":"147.305982ms","start":"2025-04-09T00:52:43.241380Z","end":"2025-04-09T00:52:43.388686Z","steps":["trace[363065472] 'read index received'  (duration: 147.201481ms)","trace[363065472] 'applied index is now lower than readState.Index'  (duration: 103.801µs)"],"step_count":2}
	{"level":"info","ts":"2025-04-09T00:52:43.389203Z","caller":"traceutil/trace.go:171","msg":"trace[35640874] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"167.762774ms","start":"2025-04-09T00:52:43.221428Z","end":"2025-04-09T00:52:43.389191Z","steps":["trace[35640874] 'process raft request'  (duration: 167.092268ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-09T00:52:43.389639Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.293391ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/multinode-611500-m02\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-09T00:52:43.389798Z","caller":"traceutil/trace.go:171","msg":"trace[1359642762] range","detail":"{range_begin:/registry/leases/kube-node-lease/multinode-611500-m02; range_end:; response_count:0; response_revision:617; }","duration":"148.497294ms","start":"2025-04-09T00:52:43.241290Z","end":"2025-04-09T00:52:43.389788Z","steps":["trace[1359642762] 'agreement among raft nodes before linearized reading'  (duration: 148.278092ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-09T00:52:43.648171Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.383292ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10210106860146277653 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-611500-m02\" mod_revision:0 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-611500-m02\" value_size:508 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-04-09T00:52:43.648767Z","caller":"traceutil/trace.go:171","msg":"trace[1277658821] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"241.215864ms","start":"2025-04-09T00:52:43.407535Z","end":"2025-04-09T00:52:43.648750Z","steps":["trace[1277658821] 'process raft request'  (duration: 123.60706ms)","trace[1277658821] 'compare'  (duration: 116.258091ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-09T00:52:44.039180Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.365243ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-611500-m02\" limit:1 ","response":"range_response_count:1 size:3150"}
	{"level":"info","ts":"2025-04-09T00:52:44.039365Z","caller":"traceutil/trace.go:171","msg":"trace[765775033] range","detail":"{range_begin:/registry/minions/multinode-611500-m02; range_end:; response_count:1; response_revision:618; }","duration":"196.600945ms","start":"2025-04-09T00:52:43.842751Z","end":"2025-04-09T00:52:44.039352Z","steps":["trace[765775033] 'range keys from in-memory index tree'  (duration: 196.161741ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-09T00:52:49.140515Z","caller":"traceutil/trace.go:171","msg":"trace[1373688797] transaction","detail":"{read_only:false; response_revision:628; number_of_response:1; }","duration":"175.295656ms","start":"2025-04-09T00:52:48.965202Z","end":"2025-04-09T00:52:49.140498Z","steps":["trace[1373688797] 'process raft request'  (duration: 175.118654ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-09T00:52:50.091898Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"283.939383ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-611500-m02\" limit:1 ","response":"range_response_count:1 size:3150"}
	{"level":"info","ts":"2025-04-09T00:52:50.092371Z","caller":"traceutil/trace.go:171","msg":"trace[419034479] range","detail":"{range_begin:/registry/minions/multinode-611500-m02; range_end:; response_count:1; response_revision:629; }","duration":"284.449288ms","start":"2025-04-09T00:52:49.807907Z","end":"2025-04-09T00:52:50.092357Z","steps":["trace[419034479] 'range keys from in-memory index tree'  (duration: 283.703881ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-09T00:52:50.092067Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.010184ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.113.157\" limit:1 ","response":"range_response_count:1 size:138"}
	{"level":"info","ts":"2025-04-09T00:52:50.092917Z","caller":"traceutil/trace.go:171","msg":"trace[1267337887] range","detail":"{range_begin:/registry/masterleases/192.168.113.157; range_end:; response_count:1; response_revision:629; }","duration":"284.875192ms","start":"2025-04-09T00:52:49.808031Z","end":"2025-04-09T00:52:50.092906Z","steps":["trace[1267337887] 'range keys from in-memory index tree'  (duration: 283.821383ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-09T00:52:50.092103Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.517538ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-04-09T00:52:50.093207Z","caller":"traceutil/trace.go:171","msg":"trace[1884183128] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:629; }","duration":"142.620049ms","start":"2025-04-09T00:52:49.950563Z","end":"2025-04-09T00:52:50.093183Z","steps":["trace[1884183128] 'count revisions from in-memory index tree'  (duration: 141.481638ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:54:20 up 7 min,  0 users,  load average: 0.14, 0.25, 0.15
	Linux multinode-611500 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [14703ff53a0b] <==
	I0409 00:53:15.583704       1 main.go:324] Node multinode-611500-m02 has CIDR [10.244.1.0/24] 
	I0409 00:53:25.583653       1 main.go:297] Handling node with IPs: map[192.168.113.157:{}]
	I0409 00:53:25.583821       1 main.go:301] handling current node
	I0409 00:53:25.583842       1 main.go:297] Handling node with IPs: map[192.168.113.143:{}]
	I0409 00:53:25.584208       1 main.go:324] Node multinode-611500-m02 has CIDR [10.244.1.0/24] 
	I0409 00:53:35.575027       1 main.go:297] Handling node with IPs: map[192.168.113.157:{}]
	I0409 00:53:35.575079       1 main.go:301] handling current node
	I0409 00:53:35.575099       1 main.go:297] Handling node with IPs: map[192.168.113.143:{}]
	I0409 00:53:35.575106       1 main.go:324] Node multinode-611500-m02 has CIDR [10.244.1.0/24] 
	I0409 00:53:45.582631       1 main.go:297] Handling node with IPs: map[192.168.113.157:{}]
	I0409 00:53:45.584492       1 main.go:301] handling current node
	I0409 00:53:45.584785       1 main.go:297] Handling node with IPs: map[192.168.113.143:{}]
	I0409 00:53:45.585125       1 main.go:324] Node multinode-611500-m02 has CIDR [10.244.1.0/24] 
	I0409 00:53:55.583072       1 main.go:297] Handling node with IPs: map[192.168.113.157:{}]
	I0409 00:53:55.583216       1 main.go:301] handling current node
	I0409 00:53:55.583260       1 main.go:297] Handling node with IPs: map[192.168.113.143:{}]
	I0409 00:53:55.583268       1 main.go:324] Node multinode-611500-m02 has CIDR [10.244.1.0/24] 
	I0409 00:54:05.575338       1 main.go:297] Handling node with IPs: map[192.168.113.157:{}]
	I0409 00:54:05.575703       1 main.go:301] handling current node
	I0409 00:54:05.575812       1 main.go:297] Handling node with IPs: map[192.168.113.143:{}]
	I0409 00:54:05.575842       1 main.go:324] Node multinode-611500-m02 has CIDR [10.244.1.0/24] 
	I0409 00:54:15.583423       1 main.go:297] Handling node with IPs: map[192.168.113.157:{}]
	I0409 00:54:15.583487       1 main.go:301] handling current node
	I0409 00:54:15.583507       1 main.go:297] Handling node with IPs: map[192.168.113.143:{}]
	I0409 00:54:15.583515       1 main.go:324] Node multinode-611500-m02 has CIDR [10.244.1.0/24] 
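
Every ~10s kindnet walks the node list and, for each remote node, makes sure the node's pod CIDR (here 10.244.1.0/24) routes via that node's IP (192.168.113.143). A rough sketch of that per-node reconciliation, assuming the vishvananda/netlink package; kindnet's real implementation differs:

    package main

    import (
    	"log"
    	"net"

    	"github.com/vishvananda/netlink"
    )

    // ensureRoute installs (or updates) "dst via gw", mirroring what the
    // kindnet log lines above report per remote node. Linux-only.
    func ensureRoute(cidr, gw string) error {
    	_, dst, err := net.ParseCIDR(cidr)
    	if err != nil {
    		return err
    	}
    	route := &netlink.Route{Dst: dst, Gw: net.ParseIP(gw)}
    	return netlink.RouteReplace(route) // idempotent upsert
    }

    func main() {
    	// Node multinode-611500-m02 has CIDR 10.244.1.0/24 and IP 192.168.113.143.
    	if err := ensureRoute("10.244.1.0/24", "192.168.113.143"); err != nil {
    		log.Fatal(err)
    	}
    }
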
	
	
	==> kube-apiserver [9698a4747b5a] <==
	I0409 00:49:18.399822       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0409 00:49:18.408991       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0409 00:49:18.409020       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0409 00:49:19.576248       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0409 00:49:19.697017       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0409 00:49:19.808148       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0409 00:49:19.835064       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.113.157]
	I0409 00:49:19.836358       1 controller.go:615] quota admission added evaluator for: endpoints
	I0409 00:49:19.845328       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0409 00:49:20.514786       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0409 00:49:20.767803       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0409 00:49:20.802075       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0409 00:49:20.819308       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0409 00:49:25.865396       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0409 00:49:26.111721       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0409 00:53:35.167258       1 conn.go:339] Error on socket receive: read tcp 192.168.113.157:8443->192.168.112.1:55643: use of closed network connection
	E0409 00:53:35.661294       1 conn.go:339] Error on socket receive: read tcp 192.168.113.157:8443->192.168.112.1:55646: use of closed network connection
	E0409 00:53:36.204454       1 conn.go:339] Error on socket receive: read tcp 192.168.113.157:8443->192.168.112.1:55648: use of closed network connection
	E0409 00:53:36.735225       1 conn.go:339] Error on socket receive: read tcp 192.168.113.157:8443->192.168.112.1:55650: use of closed network connection
	E0409 00:53:37.228447       1 conn.go:339] Error on socket receive: read tcp 192.168.113.157:8443->192.168.112.1:55652: use of closed network connection
	E0409 00:53:37.714251       1 conn.go:339] Error on socket receive: read tcp 192.168.113.157:8443->192.168.112.1:55654: use of closed network connection
	E0409 00:53:38.633650       1 conn.go:339] Error on socket receive: read tcp 192.168.113.157:8443->192.168.112.1:55657: use of closed network connection
	E0409 00:53:49.109085       1 conn.go:339] Error on socket receive: read tcp 192.168.113.157:8443->192.168.112.1:55659: use of closed network connection
	E0409 00:53:49.617579       1 conn.go:339] Error on socket receive: read tcp 192.168.113.157:8443->192.168.112.1:55668: use of closed network connection
	E0409 00:54:00.115615       1 conn.go:339] Error on socket receive: read tcp 192.168.113.157:8443->192.168.112.1:55670: use of closed network connection
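
The "use of closed network connection" errors above show the apiserver reading on 8443 after the Windows-side client (192.168.112.1) has torn its tunnel down, likely the test's repeated kubectl exec calls. In Go this surfaces as net.ErrClosed; a self-contained sketch of how the error arises:

    package main

    import (
    	"errors"
    	"fmt"
    	"net"
    )

    func main() {
    	ln, _ := net.Listen("tcp", "127.0.0.1:0")
    	defer ln.Close()

    	go func() {
    		c, _ := net.Dial("tcp", ln.Addr().String())
    		c.Close() // peer goes away immediately
    	}()

    	conn, _ := ln.Accept()
    	conn.Close() // our side is closed too, but a read still happens

    	_, err := conn.Read(make([]byte, 1))
    	// Reading a conn that was already closed yields net.ErrClosed,
    	// printed as "use of closed network connection".
    	fmt.Println(errors.Is(err, net.ErrClosed)) // true
    }
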
	
	
	==> kube-controller-manager [729d2794ba86] <==
	E0409 00:52:33.130570       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-611500-m02\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-611500-m02" podCIDRs=["10.244.2.0/24"]
	E0409 00:52:33.130841       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-611500-m02\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-611500-m02"
	E0409 00:52:33.130932       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-611500-m02': failed to patch node CIDR: Node \"multinode-611500-m02\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0409 00:52:33.131008       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-611500-m02"
	I0409 00:52:33.136498       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-611500-m02"
	I0409 00:52:33.539832       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-611500-m02"
	I0409 00:52:34.060801       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-611500-m02"
	I0409 00:52:35.106044       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-611500-m02"
	I0409 00:52:35.219077       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-611500-m02"
	I0409 00:52:43.395727       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-611500-m02"
	I0409 00:53:02.147650       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-611500-m02"
	I0409 00:53:02.147763       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-611500-m02"
	I0409 00:53:02.163477       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-611500-m02"
	I0409 00:53:03.993077       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-611500-m02"
	I0409 00:53:05.133805       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-611500-m02"
	I0409 00:53:28.264457       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="74.408326ms"
	I0409 00:53:28.279062       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="14.548842ms"
	I0409 00:53:28.279185       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="73.301µs"
	I0409 00:53:28.307004       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="84.101µs"
	I0409 00:53:31.608108       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="8.587414ms"
	I0409 00:53:31.608616       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="91.101µs"
	I0409 00:53:32.254530       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="7.790903ms"
	I0409 00:53:32.255178       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="26.801µs"
	I0409 00:53:34.715422       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-611500-m02"
	I0409 00:53:57.419101       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-611500"
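
The range_allocator errors at the top of this block show the node-ipam-controller trying to patch 10.244.2.0/24 onto multinode-611500-m02 while the node still carries 10.244.1.0/24 from before the restart; the API rejects a second CIDR per IP family, the allocation is released, and the sync is requeued until it converges on the existing CIDR. A client-go sketch (kubeconfig path is an assumption) for inspecting the field the patch fights over:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// "kubeconfig" is a placeholder path; point it at your own file.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(),
    		"multinode-611500-m02", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// spec.podCIDRs may hold at most one CIDR per IP family,
    	// which is exactly the validation error logged above.
    	fmt.Println(node.Spec.PodCIDR, node.Spec.PodCIDRs)
    }
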
	
	
	==> kube-proxy [1a9f657c2b5a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0409 00:49:28.039254       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0409 00:49:28.086921       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.113.157"]
	E0409 00:49:28.087603       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0409 00:49:28.163284       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0409 00:49:28.163425       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0409 00:49:28.163503       1 server_linux.go:170] "Using iptables Proxier"
	I0409 00:49:28.168549       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0409 00:49:28.170109       1 server.go:497] "Version info" version="v1.32.2"
	I0409 00:49:28.170208       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0409 00:49:28.177841       1 config.go:199] "Starting service config controller"
	I0409 00:49:28.177990       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0409 00:49:28.178013       1 config.go:105] "Starting endpoint slice config controller"
	I0409 00:49:28.178058       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0409 00:49:28.180425       1 config.go:329] "Starting node config controller"
	I0409 00:49:28.180604       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0409 00:49:28.278851       1 shared_informer.go:320] Caches are synced for service config
	I0409 00:49:28.278861       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0409 00:49:28.283571       1 shared_informer.go:320] Caches are synced for node config
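
The truncated "add table ip kube-proxy" errors at the top of this block are kube-proxy's nftables cleanup failing on a kernel built without nft support; it then finds no ip6tables support either and settles on the single-stack IPv4 iptables proxier. A minimal probe in the same spirit, assuming an nft binary on PATH; this is not kube-proxy's actual detection code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // supportsNftables reports whether the kernel accepts a trivial nft
    // ruleset, roughly how a proxier decides between nft and iptables.
    func supportsNftables() bool {
    	cmd := exec.Command("nft", "-f", "-")
    	cmd.Stdin = strings.NewReader("add table ip probe\ndelete table ip probe\n")
    	return cmd.Run() == nil
    }

    func main() {
    	if supportsNftables() {
    		fmt.Println("nftables available")
    	} else {
    		fmt.Println("falling back to iptables") // the path taken above
    	}
    }
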
	
	
	==> kube-scheduler [8fec401b4d08] <==
	W0409 00:49:18.579466       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0409 00:49:18.580242       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0409 00:49:18.580170       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0409 00:49:18.582429       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0409 00:49:18.589582       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0409 00:49:18.589843       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0409 00:49:18.692182       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0409 00:49:18.692231       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0409 00:49:18.809191       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0409 00:49:18.809632       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0409 00:49:18.829593       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0409 00:49:18.829649       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0409 00:49:18.852706       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0409 00:49:18.852800       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0409 00:49:18.853226       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0409 00:49:18.853480       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0409 00:49:18.913033       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0409 00:49:18.913078       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0409 00:49:18.998014       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0409 00:49:18.998208       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0409 00:49:19.016126       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0409 00:49:19.016344       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0409 00:49:19.134507       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0409 00:49:19.134933       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0409 00:49:21.742091       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
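
The forbidden list/watch errors above are the usual startup race: the scheduler's informers come up before the apiserver finishes bootstrapping RBAC, and they retry until the cache sync on the final line. A client-go sketch (kubeconfig path again assumed) that asks the same question RBAC was answering, via a SubjectAccessReview:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	authv1 "k8s.io/api/authorization/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig") // assumed path
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	sar := &authv1.SubjectAccessReview{
    		Spec: authv1.SubjectAccessReviewSpec{
    			User: "system:kube-scheduler",
    			ResourceAttributes: &authv1.ResourceAttributes{
    				Verb: "list", Resource: "pods",
    			},
    		},
    	}
    	resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
    		context.Background(), sar, metav1.CreateOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Allowed stays false until the RBAC bootstrap completes.
    	fmt.Println("allowed:", resp.Status.Allowed)
    }
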
	
	
	==> kubelet <==
	Apr 09 00:50:20 multinode-611500 kubelet[2289]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 09 00:51:20 multinode-611500 kubelet[2289]: E0409 00:51:20.722774    2289 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 09 00:51:20 multinode-611500 kubelet[2289]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 09 00:51:20 multinode-611500 kubelet[2289]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 09 00:51:20 multinode-611500 kubelet[2289]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 09 00:51:20 multinode-611500 kubelet[2289]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 09 00:52:20 multinode-611500 kubelet[2289]: E0409 00:52:20.723330    2289 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 09 00:52:20 multinode-611500 kubelet[2289]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 09 00:52:20 multinode-611500 kubelet[2289]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 09 00:52:20 multinode-611500 kubelet[2289]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 09 00:52:20 multinode-611500 kubelet[2289]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 09 00:53:20 multinode-611500 kubelet[2289]: E0409 00:53:20.720474    2289 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 09 00:53:20 multinode-611500 kubelet[2289]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 09 00:53:20 multinode-611500 kubelet[2289]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 09 00:53:20 multinode-611500 kubelet[2289]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 09 00:53:20 multinode-611500 kubelet[2289]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 09 00:53:28 multinode-611500 kubelet[2289]: I0409 00:53:28.382211    2289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z692j\" (UniqueName: \"kubernetes.io/projected/2cd940b8-79aa-4c21-95f0-9ea66a73cd4a-kube-api-access-z692j\") pod \"busybox-58667487b6-q97dd\" (UID: \"2cd940b8-79aa-4c21-95f0-9ea66a73cd4a\") " pod="default/busybox-58667487b6-q97dd"
	Apr 09 00:53:29 multinode-611500 kubelet[2289]: I0409 00:53:29.055826    2289 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5dfc9645b5a9353ae53b32225ed4966d43eb5c4eb0fc876c4fe812a4cabb6a0"
	Apr 09 00:53:35 multinode-611500 kubelet[2289]: E0409 00:53:35.661813    2289 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:60390->127.0.0.1:40113: write tcp 127.0.0.1:60390->127.0.0.1:40113: write: broken pipe
	Apr 09 00:53:36 multinode-611500 kubelet[2289]: E0409 00:53:36.735351    2289 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:60392->127.0.0.1:40113: read tcp 127.0.0.1:60392->127.0.0.1:40113: read: connection reset by peer
	Apr 09 00:54:20 multinode-611500 kubelet[2289]: E0409 00:54:20.720965    2289 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 09 00:54:20 multinode-611500 kubelet[2289]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 09 00:54:20 multinode-611500 kubelet[2289]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 09 00:54:20 multinode-611500 kubelet[2289]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 09 00:54:20 multinode-611500 kubelet[2289]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-611500 -n multinode-611500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-611500 -n multinode-611500: (11.9475624s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-611500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (58.02s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (432.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-611500
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-611500
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-611500: (1m39.4374472s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-611500 --wait=true -v=8 --alsologtostderr
E0409 01:13:10.539991    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0409 01:16:13.642042    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-611500 --wait=true -v=8 --alsologtostderr: exit status 1 (4m53.7938924s)

                                                
                                                
-- stdout --
	* [multinode-611500] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-611500" primary control-plane node in "multinode-611500" cluster
	* Restarting existing hyperv VM for "multinode-611500" ...
	* Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-611500-m02" worker node in "multinode-611500" cluster
	* Restarting existing hyperv VM for "multinode-611500-m02" ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0409 01:11:24.044830    7488 out.go:345] Setting OutFile to fd 1980 ...
	I0409 01:11:24.130740    7488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 01:11:24.130740    7488 out.go:358] Setting ErrFile to fd 1672...
	I0409 01:11:24.130740    7488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 01:11:24.151836    7488 out.go:352] Setting JSON to false
	I0409 01:11:24.156000    7488 start.go:129] hostinfo: {"hostname":"minikube6","uptime":18081,"bootTime":1744143002,"procs":178,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0409 01:11:24.156000    7488 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0409 01:11:24.324550    7488 out.go:177] * [multinode-611500] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0409 01:11:24.354536    7488 notify.go:220] Checking for updates...
	I0409 01:11:24.362841    7488 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0409 01:11:24.395036    7488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0409 01:11:24.408614    7488 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0409 01:11:24.425062    7488 out.go:177]   - MINIKUBE_LOCATION=20501
	I0409 01:11:24.438855    7488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0409 01:11:24.451306    7488 config.go:182] Loaded profile config "multinode-611500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 01:11:24.452017    7488 driver.go:404] Setting default libvirt URI to qemu:///system
	I0409 01:11:29.922334    7488 out.go:177] * Using the hyperv driver based on existing profile
	I0409 01:11:29.948325    7488 start.go:297] selected driver: hyperv
	I0409 01:11:29.948452    7488 start.go:901] validating driver "hyperv" against &{Name:multinode-611500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-611500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.157 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.113.143 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.116.185 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0409 01:11:29.948663    7488 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0409 01:11:30.004917    7488 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0409 01:11:30.005918    7488 cni.go:84] Creating CNI manager for ""
	I0409 01:11:30.005918    7488 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0409 01:11:30.005918    7488 start.go:340] cluster config:
	{Name:multinode-611500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-611500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.157 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.113.143 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.116.185 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0409 01:11:30.005918    7488 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0409 01:11:30.136771    7488 out.go:177] * Starting "multinode-611500" primary control-plane node in "multinode-611500" cluster
	I0409 01:11:30.145142    7488 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0409 01:11:30.146093    7488 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0409 01:11:30.146243    7488 cache.go:56] Caching tarball of preloaded images
	I0409 01:11:30.146570    7488 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0409 01:11:30.146570    7488 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0409 01:11:30.146570    7488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\config.json ...
	I0409 01:11:30.149567    7488 start.go:360] acquireMachinesLock for multinode-611500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0409 01:11:30.150214    7488 start.go:364] duration metric: took 544.2µs to acquireMachinesLock for "multinode-611500"
	I0409 01:11:30.150311    7488 start.go:96] Skipping create...Using existing machine configuration
	I0409 01:11:30.150311    7488 fix.go:54] fixHost starting: 
	I0409 01:11:30.151053    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:11:32.900054    7488 main.go:141] libmachine: [stdout =====>] : Off
	
	I0409 01:11:32.900054    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:11:32.900054    7488 fix.go:112] recreateIfNeeded on multinode-611500: state=Stopped err=<nil>
	W0409 01:11:32.900054    7488 fix.go:138] unexpected machine state, will restart: <nil>
	I0409 01:11:32.930172    7488 out.go:177] * Restarting existing hyperv VM for "multinode-611500" ...
	I0409 01:11:32.936482    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-611500
	I0409 01:11:35.982504    7488 main.go:141] libmachine: [stdout =====>] : 
	I0409 01:11:35.982976    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:11:35.982976    7488 main.go:141] libmachine: Waiting for host to start...
	I0409 01:11:35.982976    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:11:38.272777    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:11:38.272777    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:11:38.273894    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:11:40.819375    7488 main.go:141] libmachine: [stdout =====>] : 
	I0409 01:11:40.820147    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:11:41.820441    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:11:44.047000    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:11:44.047000    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:11:44.047000    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:11:46.572397    7488 main.go:141] libmachine: [stdout =====>] : 
	I0409 01:11:46.572397    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:11:47.573396    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:11:49.712609    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:11:49.713540    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:11:49.713856    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:11:52.188977    7488 main.go:141] libmachine: [stdout =====>] : 
	I0409 01:11:52.188977    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:11:53.190504    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:11:55.366168    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:11:55.367133    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:11:55.367133    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:11:57.859957    7488 main.go:141] libmachine: [stdout =====>] : 
	I0409 01:11:57.859957    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:11:58.860533    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:01.040095    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:01.040179    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:01.040179    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:03.572733    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:03.572733    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:03.576789    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:05.657928    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:05.657972    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:05.658080    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:08.130573    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:08.131079    7488 main.go:141] libmachine: [stderr =====>] : 
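
The loop above is libmachine polling Hyper-V through PowerShell: first the VM state, then the first NIC's first IP address, retrying roughly once a second until DHCP hands the guest an address (192.168.120.172 after ~25s here). The same polling pattern as a Go sketch, assuming powershell.exe on PATH; not minikube's actual code:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    	"time"
    )

    // ps runs one PowerShell expression and returns trimmed stdout,
    // mirroring the [executing ==>] lines in the log above.
    func ps(expr string) (string, error) {
    	out, err := exec.Command("powershell.exe",
    		"-NoProfile", "-NonInteractive", expr).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	vm := "multinode-611500"
    	for {
    		ip, err := ps(fmt.Sprintf(
    			`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
    		if err != nil {
    			log.Fatal(err)
    		}
    		if ip != "" { // empty until the guest gets a DHCP lease
    			fmt.Println("host IP:", ip)
    			return
    		}
    		time.Sleep(time.Second)
    	}
    }
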
	I0409 01:12:08.131438    7488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\config.json ...
	I0409 01:12:08.134499    7488 machine.go:93] provisionDockerMachine start ...
	I0409 01:12:08.134499    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:10.219873    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:10.220119    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:10.220254    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:12.707795    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:12.707795    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:12.714004    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:12:12.714158    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.120.172 22 <nil> <nil>}
	I0409 01:12:12.714778    7488 main.go:141] libmachine: About to run SSH command:
	hostname
	I0409 01:12:12.852142    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0409 01:12:12.852233    7488 buildroot.go:166] provisioning hostname "multinode-611500"
	I0409 01:12:12.852321    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:14.927391    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:14.928151    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:14.928151    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:17.391452    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:17.391452    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:17.399339    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:12:17.399683    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.120.172 22 <nil> <nil>}
	I0409 01:12:17.399683    7488 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-611500 && echo "multinode-611500" | sudo tee /etc/hostname
	I0409 01:12:17.569282    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-611500
	
	I0409 01:12:17.569412    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:19.659808    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:19.659808    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:19.660517    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:22.079274    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:22.079274    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:22.085484    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:12:22.085603    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.120.172 22 <nil> <nil>}
	I0409 01:12:22.086226    7488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-611500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-611500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-611500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0409 01:12:22.238700    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0409 01:12:22.238834    7488 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0409 01:12:22.238949    7488 buildroot.go:174] setting up certificates
	I0409 01:12:22.238949    7488 provision.go:84] configureAuth start
	I0409 01:12:22.239046    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:24.286455    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:24.286455    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:24.286843    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:26.725682    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:26.726409    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:26.726520    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:28.873228    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:28.873228    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:28.873798    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:31.373353    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:31.373353    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:31.373913    7488 provision.go:143] copyHostCerts
	I0409 01:12:31.374115    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0409 01:12:31.374463    7488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0409 01:12:31.374550    7488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0409 01:12:31.375120    7488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0409 01:12:31.376717    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0409 01:12:31.376932    7488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0409 01:12:31.376932    7488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0409 01:12:31.377469    7488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0409 01:12:31.378774    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0409 01:12:31.378934    7488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0409 01:12:31.378934    7488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0409 01:12:31.378934    7488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0409 01:12:31.380372    7488 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-611500 san=[127.0.0.1 192.168.120.172 localhost minikube multinode-611500]
	I0409 01:12:31.821702    7488 provision.go:177] copyRemoteCerts
	I0409 01:12:31.834522    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0409 01:12:31.834752    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:33.956514    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:33.956875    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:33.956875    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:36.445084    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:36.445084    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:36.446048    7488 sshutil.go:53] new ssh client: &{IP:192.168.120.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\id_rsa Username:docker}
	I0409 01:12:36.557082    7488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7224408s)
	I0409 01:12:36.557137    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0409 01:12:36.557290    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0409 01:12:36.602221    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0409 01:12:36.602221    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0409 01:12:36.650714    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0409 01:12:36.651283    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0409 01:12:36.696514    7488 provision.go:87] duration metric: took 14.4572627s to configureAuth
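configureAuth, timed above, refreshes the host-side copies of cert.pem/key.pem/ca.pem, signs a server certificate whose SANs cover every address the VM's Docker TLS endpoint answers on (127.0.0.1, the VM IP, localhost, minikube, and the machine name), and copies the results into /etc/docker over SSH. A standard-library sketch of just the signing step, with error handling elided and the CA generated inline instead of loaded from ca.pem/ca-key.pem as the real flow does:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func main() {
	// CA key pair; the real flow loads these from ca.pem / ca-key.pem instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate whose SANs mirror the san=[...] list in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-611500"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.120.172")},
		DNSNames:     []string{"localhost", "minikube", "multinode-611500"},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = srvDER // the real flow PEM-encodes this to server.pem and copies it over
}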
	I0409 01:12:36.696577    7488 buildroot.go:189] setting minikube options for container-runtime
	I0409 01:12:36.697710    7488 config.go:182] Loaded profile config "multinode-611500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 01:12:36.697870    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:38.822850    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:38.823351    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:38.823351    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:41.275713    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:41.275713    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:41.282250    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:12:41.282528    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.120.172 22 <nil> <nil>}
	I0409 01:12:41.282528    7488 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0409 01:12:41.415451    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0409 01:12:41.415451    7488 buildroot.go:70] root file system type: tmpfs
	I0409 01:12:41.415744    7488 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0409 01:12:41.415850    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:43.465288    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:43.465288    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:43.466018    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:45.951733    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:45.951733    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:45.957735    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:12:45.958266    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.120.172 22 <nil> <nil>}
	I0409 01:12:45.958565    7488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0409 01:12:46.127008    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0409 01:12:46.127008    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:48.234237    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:48.234237    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:48.234664    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:50.717069    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:50.717176    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:50.724829    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:12:50.725610    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.120.172 22 <nil> <nil>}
	I0409 01:12:50.725610    7488 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0409 01:12:53.352381    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0409 01:12:53.352381    7488 machine.go:96] duration metric: took 45.2173037s to provisionDockerMachine
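The diff/mv one-liner above is what keeps re-provisioning idempotent: docker.service.new is always rendered, but the daemon is only reloaded, enabled, and restarted when the rendered unit differs from the installed one (here diff failed because no unit existed yet, so the new file was simply moved into place and the symlink created). A hypothetical Go generator for that guard:

package main

import "fmt"

// installUnitCmd rebuilds the compare-then-swap one-liner shown above for an
// arbitrary systemd unit. Illustrative only.
func installUnitCmd(unit string) string {
	tgt := "/lib/systemd/system/" + unit
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && "+
			"sudo systemctl -f restart %[2]s; }", tgt, unit)
}

func main() { fmt.Println(installUnitCmd("docker.service")) }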
	I0409 01:12:53.352381    7488 start.go:293] postStartSetup for "multinode-611500" (driver="hyperv")
	I0409 01:12:53.352381    7488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0409 01:12:53.365715    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0409 01:12:53.365715    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:55.543657    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:55.543731    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:55.543903    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:58.014968    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:58.014968    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:58.015749    7488 sshutil.go:53] new ssh client: &{IP:192.168.120.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\id_rsa Username:docker}
	I0409 01:12:58.132179    7488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7662987s)
	I0409 01:12:58.147237    7488 ssh_runner.go:195] Run: cat /etc/os-release
	I0409 01:12:58.158008    7488 command_runner.go:130] > NAME=Buildroot
	I0409 01:12:58.158008    7488 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0409 01:12:58.158008    7488 command_runner.go:130] > ID=buildroot
	I0409 01:12:58.158008    7488 command_runner.go:130] > VERSION_ID=2023.02.9
	I0409 01:12:58.158008    7488 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0409 01:12:58.158008    7488 info.go:137] Remote host: Buildroot 2023.02.9
	I0409 01:12:58.158008    7488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0409 01:12:58.158008    7488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0409 01:12:58.159040    7488 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
	I0409 01:12:58.159040    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /etc/ssl/certs/98642.pem
	I0409 01:12:58.173318    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0409 01:12:58.196642    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
	I0409 01:12:58.242208    7488 start.go:296] duration metric: took 4.889764s for postStartSetup
	I0409 01:12:58.242334    7488 fix.go:56] duration metric: took 1m28.0908954s for fixHost
	I0409 01:12:58.242334    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:13:00.371136    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:13:00.372044    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:13:00.372350    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:13:02.917808    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:13:02.918446    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:13:02.924303    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:13:02.924447    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.120.172 22 <nil> <nil>}
	I0409 01:13:02.924447    7488 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0409 01:13:03.055465    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744161183.075461872
	
	I0409 01:13:03.055598    7488 fix.go:216] guest clock: 1744161183.075461872
	I0409 01:13:03.055598    7488 fix.go:229] Guest: 2025-04-09 01:13:03.075461872 +0000 UTC Remote: 2025-04-09 01:12:58.242334 +0000 UTC m=+94.294803901 (delta=4.833127872s)
	I0409 01:13:03.055750    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:13:05.187244    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:13:05.187244    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:13:05.187834    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:13:07.706514    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:13:07.706786    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:13:07.712868    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:13:07.712868    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.120.172 22 <nil> <nil>}
	I0409 01:13:07.712868    7488 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744161183
	I0409 01:13:07.856863    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Apr  9 01:13:03 UTC 2025
	
	I0409 01:13:07.856863    7488 fix.go:236] clock set: Wed Apr  9 01:13:03 UTC 2025
	 (err=<nil>)
	I0409 01:13:07.856863    7488 start.go:83] releasing machines lock for "multinode-611500", held for 1m37.7053676s
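The fix.go lines above read date +%s.%N inside the guest, compare it against the host-side timestamp recorded when provisioning finished (delta=4.833127872s here), and reset the guest clock with date -s. A rough sketch of that check, assuming runSSH stands in for the real SSH runner and using an arbitrary two-second threshold (the real tolerance may differ):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// runSSH stands in for the real SSH runner; it just returns the guest's
// answer from the log above so the example is self-contained.
func runSSH(cmd string) string { return "1744161183.075461872" }

func main() {
	out := strings.TrimSpace(runSSH("date +%s.%N"))
	secs, _ := strconv.ParseFloat(out, 64)
	guest := time.Unix(int64(secs), 0) // sub-second part dropped for the comparison
	skew := guest.Sub(time.Now())
	if skew > 2*time.Second || skew < -2*time.Second { // threshold chosen arbitrarily
		// Same shape as the "sudo date -s @<unix>" command in the log.
		runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
	}
}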
	I0409 01:13:07.857474    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:13:09.970430    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:13:09.970430    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:13:09.971541    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:13:12.498657    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:13:12.498657    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:13:12.503585    7488 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0409 01:13:12.503735    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:13:12.513069    7488 ssh_runner.go:195] Run: cat /version.json
	I0409 01:13:12.513069    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:13:14.726777    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:13:14.726963    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:13:14.726963    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:13:14.727044    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:13:14.727044    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:13:14.727044    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:13:17.335662    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:13:17.336313    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:13:17.336313    7488 sshutil.go:53] new ssh client: &{IP:192.168.120.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\id_rsa Username:docker}
	I0409 01:13:17.365635    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:13:17.365635    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:13:17.366275    7488 sshutil.go:53] new ssh client: &{IP:192.168.120.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\id_rsa Username:docker}
	I0409 01:13:17.430552    7488 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0409 01:13:17.430728    7488 ssh_runner.go:235] Completed: cat /version.json: (4.9175958s)
	I0409 01:13:17.442251    7488 ssh_runner.go:195] Run: systemctl --version
	I0409 01:13:17.446195    7488 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0409 01:13:17.447450    7488 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9436641s)
	W0409 01:13:17.447450    7488 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0409 01:13:17.455280    7488 command_runner.go:130] > systemd 252 (252)
	I0409 01:13:17.455280    7488 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0409 01:13:17.467523    7488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0409 01:13:17.475405    7488 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0409 01:13:17.476493    7488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0409 01:13:17.485811    7488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0409 01:13:17.516740    7488 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0409 01:13:17.516740    7488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0409 01:13:17.516740    7488 start.go:495] detecting cgroup driver to use...
	I0409 01:13:17.516740    7488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0409 01:13:17.548230    7488 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	W0409 01:13:17.560986    7488 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0409 01:13:17.560986    7488 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0409 01:13:17.562333    7488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0409 01:13:17.591510    7488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0409 01:13:17.610371    7488 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0409 01:13:17.621732    7488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0409 01:13:17.650746    7488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0409 01:13:17.681949    7488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0409 01:13:17.710530    7488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0409 01:13:17.741508    7488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0409 01:13:17.770114    7488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0409 01:13:17.802673    7488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0409 01:13:17.833932    7488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0409 01:13:17.864420    7488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0409 01:13:17.881103    7488 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0409 01:13:17.881361    7488 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0409 01:13:17.893007    7488 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0409 01:13:17.929138    7488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
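The sysctl failure above is expected on a fresh boot: net.bridge.bridge-nf-call-iptables only appears once br_netfilter is loaded, so the runner probes the sysctl, loads the module when the probe fails, and then enables IPv4 forwarding; both are prerequisites for bridge-based CNI traffic. The same sequence sketched with os/exec (run locally here, over SSH in the real flow):

package main

import "os/exec"

func main() {
	// Probe the sysctl first; it only exists once br_netfilter is loaded.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Expected failure path on a fresh boot: load the module and move on.
		exec.Command("sudo", "modprobe", "br_netfilter").Run()
	}
	// Bridge CNI also needs IPv4 forwarding enabled.
	exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}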
	I0409 01:13:17.955430    7488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 01:13:18.138081    7488 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0409 01:13:18.167441    7488 start.go:495] detecting cgroup driver to use...
	I0409 01:13:18.177442    7488 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0409 01:13:18.200777    7488 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0409 01:13:18.200777    7488 command_runner.go:130] > [Unit]
	I0409 01:13:18.200777    7488 command_runner.go:130] > Description=Docker Application Container Engine
	I0409 01:13:18.200777    7488 command_runner.go:130] > Documentation=https://docs.docker.com
	I0409 01:13:18.200777    7488 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0409 01:13:18.200777    7488 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0409 01:13:18.200963    7488 command_runner.go:130] > StartLimitBurst=3
	I0409 01:13:18.201002    7488 command_runner.go:130] > StartLimitIntervalSec=60
	I0409 01:13:18.201002    7488 command_runner.go:130] > [Service]
	I0409 01:13:18.201002    7488 command_runner.go:130] > Type=notify
	I0409 01:13:18.201002    7488 command_runner.go:130] > Restart=on-failure
	I0409 01:13:18.201049    7488 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0409 01:13:18.201049    7488 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0409 01:13:18.201083    7488 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0409 01:13:18.201083    7488 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0409 01:13:18.201083    7488 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0409 01:13:18.201133    7488 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0409 01:13:18.201133    7488 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0409 01:13:18.201174    7488 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0409 01:13:18.201219    7488 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0409 01:13:18.201260    7488 command_runner.go:130] > ExecStart=
	I0409 01:13:18.201411    7488 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0409 01:13:18.201471    7488 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0409 01:13:18.201471    7488 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0409 01:13:18.201502    7488 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0409 01:13:18.201502    7488 command_runner.go:130] > LimitNOFILE=infinity
	I0409 01:13:18.201557    7488 command_runner.go:130] > LimitNPROC=infinity
	I0409 01:13:18.201557    7488 command_runner.go:130] > LimitCORE=infinity
	I0409 01:13:18.201557    7488 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0409 01:13:18.201598    7488 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0409 01:13:18.201598    7488 command_runner.go:130] > TasksMax=infinity
	I0409 01:13:18.201598    7488 command_runner.go:130] > TimeoutStartSec=0
	I0409 01:13:18.201644    7488 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0409 01:13:18.201644    7488 command_runner.go:130] > Delegate=yes
	I0409 01:13:18.201644    7488 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0409 01:13:18.201644    7488 command_runner.go:130] > KillMode=process
	I0409 01:13:18.201684    7488 command_runner.go:130] > [Install]
	I0409 01:13:18.201684    7488 command_runner.go:130] > WantedBy=multi-user.target
	I0409 01:13:18.213302    7488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0409 01:13:18.245337    7488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0409 01:13:18.294101    7488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0409 01:13:18.326585    7488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0409 01:13:18.379052    7488 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0409 01:13:18.448069    7488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0409 01:13:18.475123    7488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0409 01:13:18.509575    7488 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0409 01:13:18.520150    7488 ssh_runner.go:195] Run: which cri-dockerd
	I0409 01:13:18.526211    7488 command_runner.go:130] > /usr/bin/cri-dockerd
	I0409 01:13:18.538927    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0409 01:13:18.556154    7488 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0409 01:13:18.605691    7488 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0409 01:13:18.804543    7488 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0409 01:13:18.979273    7488 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0409 01:13:18.979273    7488 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
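"scp memory --> /etc/docker/daemon.json (130 bytes)" means the asset never touches the local disk; an in-memory byte slice is streamed straight into the remote file. A sketch of that pattern with golang.org/x/crypto/ssh, where writeRemote is a hypothetical helper (client construction elided), not minikube's actual ssh_runner API:

package main

import (
	"bytes"

	"golang.org/x/crypto/ssh"
)

// writeRemote streams an in-memory asset into path on the VM, roughly the
// "scp memory --> ..." pattern above.
func writeRemote(client *ssh.Client, path string, data []byte) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// tee as root is one simple sink for a byte stream arriving over stdin.
	return sess.Run("sudo tee " + path + " >/dev/null")
}

func main() {} // writeRemote is exercised by the provisioning flow, not here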
	I0409 01:13:19.028804    7488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 01:13:19.215662    7488 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0409 01:13:21.915269    7488 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6995716s)
	I0409 01:13:21.926704    7488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0409 01:13:21.964157    7488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0409 01:13:21.999196    7488 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0409 01:13:22.203016    7488 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0409 01:13:22.387131    7488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 01:13:22.584835    7488 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0409 01:13:22.623645    7488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0409 01:13:22.654650    7488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 01:13:22.857009    7488 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0409 01:13:22.964438    7488 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0409 01:13:22.975931    7488 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0409 01:13:22.985074    7488 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0409 01:13:22.985074    7488 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0409 01:13:22.985074    7488 command_runner.go:130] > Device: 0,22	Inode: 842         Links: 1
	I0409 01:13:22.985074    7488 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0409 01:13:22.985074    7488 command_runner.go:130] > Access: 2025-04-09 01:13:22.900320156 +0000
	I0409 01:13:22.985289    7488 command_runner.go:130] > Modify: 2025-04-09 01:13:22.900320156 +0000
	I0409 01:13:22.985338    7488 command_runner.go:130] > Change: 2025-04-09 01:13:22.904320186 +0000
	I0409 01:13:22.985338    7488 command_runner.go:130] >  Birth: -
	I0409 01:13:22.985446    7488 start.go:563] Will wait 60s for crictl version
	I0409 01:13:22.995543    7488 ssh_runner.go:195] Run: which crictl
	I0409 01:13:23.001057    7488 command_runner.go:130] > /usr/bin/crictl
	I0409 01:13:23.012641    7488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0409 01:13:23.060711    7488 command_runner.go:130] > Version:  0.1.0
	I0409 01:13:23.060711    7488 command_runner.go:130] > RuntimeName:  docker
	I0409 01:13:23.060711    7488 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0409 01:13:23.060711    7488 command_runner.go:130] > RuntimeApiVersion:  v1
	I0409 01:13:23.060711    7488 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0409 01:13:23.070324    7488 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0409 01:13:23.101284    7488 command_runner.go:130] > 27.4.0
	I0409 01:13:23.110132    7488 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0409 01:13:23.147143    7488 command_runner.go:130] > 27.4.0
	I0409 01:13:23.153437    7488 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0409 01:13:23.153437    7488 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0409 01:13:23.157802    7488 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0409 01:13:23.157802    7488 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0409 01:13:23.157802    7488 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0409 01:13:23.157802    7488 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f4:da:75 Flags:up|broadcast|multicast|running}
	I0409 01:13:23.161691    7488 ip.go:214] interface addr: fe80::e8ab:9cc6:22b1:a5fc/64
	I0409 01:13:23.161835    7488 ip.go:214] interface addr: 192.168.112.1/20
	I0409 01:13:23.172408    7488 ssh_runner.go:195] Run: grep 192.168.112.1	host.minikube.internal$ /etc/hosts
	I0409 01:13:23.178060    7488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
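host.minikube.internal gives workloads inside the VM a stable name for the Windows host, pinned to the matching vEthernet gateway address found above (192.168.112.1). The bash fragment refreshes that record in one pass: filter out any stale line, append the current IP, write to a temp file, then copy it back over /etc/hosts. A hypothetical generator for the fragment:

package main

import "fmt"

// hostsRecordCmd rebuilds the shell fragment above for an arbitrary record.
func hostsRecordCmd(ip, name string) string {
	return fmt.Sprintf(
		"{ grep -v $'\\t%[2]s$' /etc/hosts; echo \"%[1]s\t%[2]s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts",
		ip, name)
}

func main() { fmt.Println(hostsRecordCmd("192.168.112.1", "host.minikube.internal")) }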
	I0409 01:13:23.203114    7488 kubeadm.go:883] updating cluster {Name:multinode-611500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-611500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.120.172 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.113.143 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.116.185 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0409 01:13:23.203114    7488 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0409 01:13:23.214847    7488 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0409 01:13:23.241950    7488 command_runner.go:130] > kindest/kindnetd:v20250214-acbabc1a
	I0409 01:13:23.241950    7488 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.2
	I0409 01:13:23.241950    7488 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.2
	I0409 01:13:23.241950    7488 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.2
	I0409 01:13:23.241950    7488 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.2
	I0409 01:13:23.241950    7488 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0409 01:13:23.241950    7488 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0409 01:13:23.241950    7488 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0409 01:13:23.241950    7488 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0409 01:13:23.241950    7488 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0409 01:13:23.241950    7488 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20250214-acbabc1a
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0409 01:13:23.241950    7488 docker.go:619] Images already preloaded, skipping extraction
	I0409 01:13:23.252482    7488 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0409 01:13:23.278026    7488 command_runner.go:130] > kindest/kindnetd:v20250214-acbabc1a
	I0409 01:13:23.278026    7488 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.2
	I0409 01:13:23.278026    7488 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.2
	I0409 01:13:23.278026    7488 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.2
	I0409 01:13:23.278026    7488 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.2
	I0409 01:13:23.278026    7488 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0409 01:13:23.278026    7488 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0409 01:13:23.278026    7488 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0409 01:13:23.278026    7488 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0409 01:13:23.278026    7488 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0409 01:13:23.278026    7488 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20250214-acbabc1a
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0409 01:13:23.278026    7488 cache_images.go:84] Images are preloaded, skipping loading
	I0409 01:13:23.278026    7488 kubeadm.go:934] updating node { 192.168.120.172 8443 v1.32.2 docker true true} ...
	I0409 01:13:23.278629    7488 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-611500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.120.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:multinode-611500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0409 01:13:23.289105    7488 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0409 01:13:23.350732    7488 command_runner.go:130] > cgroupfs
	I0409 01:13:23.350907    7488 cni.go:84] Creating CNI manager for ""
	I0409 01:13:23.350996    7488 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0409 01:13:23.351063    7488 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0409 01:13:23.351170    7488 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.120.172 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-611500 NodeName:multinode-611500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.120.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.120.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0409 01:13:23.351467    7488 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.120.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-611500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.120.172"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.120.172"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
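One detail in the rendered config above: cgroupDriver: cgroupfs in the KubeletConfiguration is not a constant; it echoes the value the runner detected a few steps earlier with docker info, since kubelet and the container runtime must agree on a cgroup driver. A sketch of that detection:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe the runner executed before rendering the config above.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	driver := strings.TrimSpace(string(out)) // "cgroupfs" here; "systemd" elsewhere
	fmt.Printf("cgroupDriver: %s\n", driver) // the value threaded into KubeletConfiguration
}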
	
	I0409 01:13:23.363388    7488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0409 01:13:23.380948    7488 command_runner.go:130] > kubeadm
	I0409 01:13:23.380948    7488 command_runner.go:130] > kubectl
	I0409 01:13:23.380948    7488 command_runner.go:130] > kubelet
	I0409 01:13:23.380948    7488 binaries.go:44] Found k8s binaries, skipping transfer
	I0409 01:13:23.390929    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0409 01:13:23.406058    7488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0409 01:13:23.435463    7488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0409 01:13:23.462952    7488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2303 bytes)
	I0409 01:13:23.504629    7488 ssh_runner.go:195] Run: grep 192.168.120.172	control-plane.minikube.internal$ /etc/hosts
	I0409 01:13:23.511090    7488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.120.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0409 01:13:23.547217    7488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 01:13:23.724250    7488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0409 01:13:23.753999    7488 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500 for IP: 192.168.120.172
	I0409 01:13:23.754125    7488 certs.go:194] generating shared ca certs ...
	I0409 01:13:23.754217    7488 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 01:13:23.754566    7488 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0409 01:13:23.755228    7488 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0409 01:13:23.755228    7488 certs.go:256] generating profile certs ...
	I0409 01:13:23.756710    7488 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\client.key
	I0409 01:13:23.756710    7488 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key.70495b6d
	I0409 01:13:23.756710    7488 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt.70495b6d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.120.172]
	I0409 01:13:23.873720    7488 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt.70495b6d ...
	I0409 01:13:23.873720    7488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt.70495b6d: {Name:mk1f0b0fb179e64b9d993ea458f993460d72ba51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 01:13:23.875143    7488 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key.70495b6d ...
	I0409 01:13:23.875143    7488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key.70495b6d: {Name:mk56ffa6364a87645628d6f8b747da00a5a3e3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 01:13:23.876159    7488 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt.70495b6d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt
	I0409 01:13:23.891858    7488 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key.70495b6d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key
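A note on the SAN list used for apiserver.crt above: besides 127.0.0.1 and the node IP, it includes 10.96.0.1, the first host address of the 10.96.0.0/12 service CIDR, because the in-cluster "kubernetes" Service reaches the apiserver through that ClusterIP. Deriving that address:

package main

import (
	"fmt"
	"net"
)

func main() {
	_, cidr, _ := net.ParseCIDR("10.96.0.0/12")
	ip := cidr.IP.To4()
	// First host address = network address + 1 (fine here; no octet overflow).
	first := net.IPv4(ip[0], ip[1], ip[2], ip[3]+1)
	fmt.Println(first) // 10.96.0.1
}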
	I0409 01:13:23.893466    7488 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\proxy-client.key
	I0409 01:13:23.893466    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0409 01:13:23.893611    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0409 01:13:23.893851    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0409 01:13:23.894032    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0409 01:13:23.894092    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0409 01:13:23.894092    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0409 01:13:23.894839    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0409 01:13:23.895160    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0409 01:13:23.895477    7488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem (1338 bytes)
	W0409 01:13:23.895477    7488 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864_empty.pem, impossibly tiny 0 bytes
	I0409 01:13:23.896020    7488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0409 01:13:23.896175    7488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0409 01:13:23.896175    7488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0409 01:13:23.905841    7488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0409 01:13:23.906767    7488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem (1708 bytes)
	I0409 01:13:23.907162    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /usr/share/ca-certificates/98642.pem
	I0409 01:13:23.907374    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0409 01:13:23.907374    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem -> /usr/share/ca-certificates/9864.pem
	I0409 01:13:23.908798    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0409 01:13:23.963556    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0409 01:13:24.009290    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0409 01:13:24.069626    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0409 01:13:24.115539    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0409 01:13:24.162954    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0409 01:13:24.208550    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0409 01:13:24.255232    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0409 01:13:24.300410    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /usr/share/ca-certificates/98642.pem (1708 bytes)
	I0409 01:13:24.346151    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0409 01:13:24.390876    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem --> /usr/share/ca-certificates/9864.pem (1338 bytes)
	I0409 01:13:24.438287    7488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0409 01:13:24.482063    7488 ssh_runner.go:195] Run: openssl version
	I0409 01:13:24.488753    7488 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0409 01:13:24.497752    7488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9864.pem && ln -fs /usr/share/ca-certificates/9864.pem /etc/ssl/certs/9864.pem"
	I0409 01:13:24.528067    7488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9864.pem
	I0409 01:13:24.535031    7488 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr  8 23:04 /usr/share/ca-certificates/9864.pem
	I0409 01:13:24.535119    7488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 23:04 /usr/share/ca-certificates/9864.pem
	I0409 01:13:24.546279    7488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9864.pem
	I0409 01:13:24.554340    7488 command_runner.go:130] > 51391683
	I0409 01:13:24.565665    7488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9864.pem /etc/ssl/certs/51391683.0"
	I0409 01:13:24.594717    7488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98642.pem && ln -fs /usr/share/ca-certificates/98642.pem /etc/ssl/certs/98642.pem"
	I0409 01:13:24.624528    7488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98642.pem
	I0409 01:13:24.631397    7488 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr  8 23:04 /usr/share/ca-certificates/98642.pem
	I0409 01:13:24.631397    7488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 23:04 /usr/share/ca-certificates/98642.pem
	I0409 01:13:24.643699    7488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98642.pem
	I0409 01:13:24.651714    7488 command_runner.go:130] > 3ec20f2e
	I0409 01:13:24.666302    7488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98642.pem /etc/ssl/certs/3ec20f2e.0"
	I0409 01:13:24.695383    7488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0409 01:13:24.726818    7488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0409 01:13:24.735662    7488 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr  8 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0409 01:13:24.735662    7488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0409 01:13:24.747336    7488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0409 01:13:24.755788    7488 command_runner.go:130] > b5213941
	I0409 01:13:24.768257    7488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
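	[note] The three openssl runs above implement OpenSSL's hashed-directory trust lookup: each CA's subject hash (51391683, 3ec20f2e, b5213941) is linked as <hash>.0 under /etc/ssl/certs, which is where OpenSSL-linked clients resolve trust anchors by directory scan. A minimal Go sketch of the same pattern, shelling out to openssl exactly as the log does (installCACert and the hard-coded paths are illustrative, not minikube's API; minikube runs the equivalent shell over SSH inside the guest):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert hashes a PEM certificate with `openssl x509 -hash`
    // and exposes it in certsDir as <hash>.0, the name OpenSSL expects
    // for directory-based trust-anchor lookup.
    func installCACert(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // replace any stale link, like `ln -fs`
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }

	The ".0" suffix is a collision counter: a second CA that happened to hash to the same value would be linked as <hash>.1.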
	I0409 01:13:24.799528    7488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0409 01:13:24.807326    7488 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0409 01:13:24.807326    7488 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0409 01:13:24.807415    7488 command_runner.go:130] > Device: 8,1	Inode: 5242721     Links: 1
	I0409 01:13:24.807415    7488 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0409 01:13:24.807415    7488 command_runner.go:130] > Access: 2025-04-09 00:49:09.242960801 +0000
	I0409 01:13:24.807415    7488 command_runner.go:130] > Modify: 2025-04-09 00:49:09.242960801 +0000
	I0409 01:13:24.807457    7488 command_runner.go:130] > Change: 2025-04-09 00:49:09.242960801 +0000
	I0409 01:13:24.807457    7488 command_runner.go:130] >  Birth: 2025-04-09 00:49:09.242960801 +0000
	I0409 01:13:24.818924    7488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0409 01:13:24.826916    7488 command_runner.go:130] > Certificate will not expire
	I0409 01:13:24.837905    7488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0409 01:13:24.846354    7488 command_runner.go:130] > Certificate will not expire
	I0409 01:13:24.858754    7488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0409 01:13:24.868312    7488 command_runner.go:130] > Certificate will not expire
	I0409 01:13:24.881994    7488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0409 01:13:24.891299    7488 command_runner.go:130] > Certificate will not expire
	I0409 01:13:24.902666    7488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0409 01:13:24.911985    7488 command_runner.go:130] > Certificate will not expire
	I0409 01:13:24.923638    7488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0409 01:13:24.932761    7488 command_runner.go:130] > Certificate will not expire
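	[note] `openssl x509 -checkend 86400` asks whether a certificate expires within the next 86400 seconds (24 h); exit status 0 produces the "Certificate will not expire" lines above, which is what lets minikube reuse the control-plane certs instead of regenerating them. The same check in pure Go via crypto/x509 (expiresWithin is a hypothetical helper name, not minikube's):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file
    // expires within the given window — the question
    // `openssl x509 -checkend 86400` answers for the certs above.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	if soon {
    		fmt.Println("Certificate will expire")
    	} else {
    		fmt.Println("Certificate will not expire")
    	}
    }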
	I0409 01:13:24.932761    7488 kubeadm.go:392] StartCluster: {Name:multinode-611500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-611500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.120.172 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.113.143 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.116.185 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0409 01:13:24.942035    7488 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0409 01:13:24.979620    7488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0409 01:13:25.000771    7488 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0409 01:13:25.000771    7488 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0409 01:13:25.000771    7488 command_runner.go:130] > /var/lib/minikube/etcd:
	I0409 01:13:25.000771    7488 command_runner.go:130] > member
	I0409 01:13:25.000771    7488 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0409 01:13:25.000771    7488 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0409 01:13:25.012426    7488 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0409 01:13:25.037358    7488 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0409 01:13:25.039189    7488 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-611500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0409 01:13:25.040108    7488 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-611500" cluster setting kubeconfig missing "multinode-611500" context setting]
	I0409 01:13:25.040777    7488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 01:13:25.059735    7488 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0409 01:13:25.060293    7488 kapi.go:59] client config for multinode-611500: &rest.Config{Host:"https://192.168.120.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-611500/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-611500/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2809400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0409 01:13:25.062008    7488 cert_rotation.go:140] Starting client certificate rotation controller
	I0409 01:13:25.062008    7488 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0409 01:13:25.062008    7488 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0409 01:13:25.062008    7488 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0409 01:13:25.062008    7488 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0409 01:13:25.072273    7488 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0409 01:13:25.089753    7488 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0409 01:13:25.089827    7488 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0409 01:13:25.089827    7488 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0409 01:13:25.089827    7488 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I0409 01:13:25.089877    7488 command_runner.go:130] >  kind: InitConfiguration
	I0409 01:13:25.089877    7488 command_runner.go:130] >  localAPIEndpoint:
	I0409 01:13:25.089877    7488 command_runner.go:130] > -  advertiseAddress: 192.168.113.157
	I0409 01:13:25.089877    7488 command_runner.go:130] > +  advertiseAddress: 192.168.120.172
	I0409 01:13:25.089877    7488 command_runner.go:130] >    bindPort: 8443
	I0409 01:13:25.089948    7488 command_runner.go:130] >  bootstrapTokens:
	I0409 01:13:25.089948    7488 command_runner.go:130] >    - groups:
	I0409 01:13:25.089948    7488 command_runner.go:130] > @@ -15,13 +15,13 @@
	I0409 01:13:25.089948    7488 command_runner.go:130] >    name: "multinode-611500"
	I0409 01:13:25.089948    7488 command_runner.go:130] >    kubeletExtraArgs:
	I0409 01:13:25.089948    7488 command_runner.go:130] >      - name: "node-ip"
	I0409 01:13:25.089948    7488 command_runner.go:130] > -      value: "192.168.113.157"
	I0409 01:13:25.089948    7488 command_runner.go:130] > +      value: "192.168.120.172"
	I0409 01:13:25.090133    7488 command_runner.go:130] >    taints: []
	I0409 01:13:25.090133    7488 command_runner.go:130] >  ---
	I0409 01:13:25.090133    7488 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I0409 01:13:25.090133    7488 command_runner.go:130] >  kind: ClusterConfiguration
	I0409 01:13:25.090133    7488 command_runner.go:130] >  apiServer:
	I0409 01:13:25.090133    7488 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "192.168.113.157"]
	I0409 01:13:25.090133    7488 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "192.168.120.172"]
	I0409 01:13:25.090133    7488 command_runner.go:130] >    extraArgs:
	I0409 01:13:25.090133    7488 command_runner.go:130] >      - name: "enable-admission-plugins"
	I0409 01:13:25.090259    7488 command_runner.go:130] >        value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0409 01:13:25.090347    7488 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 192.168.113.157
	+  advertiseAddress: 192.168.120.172
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -15,13 +15,13 @@
	   name: "multinode-611500"
	   kubeletExtraArgs:
	     - name: "node-ip"
	-      value: "192.168.113.157"
	+      value: "192.168.120.172"
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "192.168.113.157"]
	+  certSANs: ["127.0.0.1", "localhost", "192.168.120.172"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	       value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	
	-- /stdout --
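	[note] Drift detection here is simply `diff -u` between the kubeadm config already on disk and the freshly rendered one; any non-empty diff (above, the advertiseAddress/certSANs moving from 192.168.113.157 to 192.168.120.172 after the VM picked up a new DHCP lease) triggers a full control-plane reconfigure rather than a plain restart. A sketch of the same yes/no decision (configDrift is an illustrative name, not minikube's):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // configDrift reports whether the rendered kubeadm config differs
    // from the one on disk, and returns a unified diff when it does.
    func configDrift(current, next string) (bool, string, error) {
    	a, err := os.ReadFile(current)
    	if err != nil {
    		return false, "", err
    	}
    	b, err := os.ReadFile(next)
    	if err != nil {
    		return false, "", err
    	}
    	if bytes.Equal(a, b) {
    		return false, "", nil
    	}
    	// diff exits 1 when files differ; ignore that and keep the output.
    	out, _ := exec.Command("diff", "-u", current, next).Output()
    	return true, string(out), nil
    }

    func main() {
    	drift, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	if drift {
    		fmt.Println("detected kubeadm config drift:")
    		fmt.Print(diff)
    	}
    }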
	I0409 01:13:25.090386    7488 kubeadm.go:1160] stopping kube-system containers ...
	I0409 01:13:25.099340    7488 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0409 01:13:25.131365    7488 command_runner.go:130] > 934a19227ceb
	I0409 01:13:25.131557    7488 command_runner.go:130] > 81bdf2c1b915
	I0409 01:13:25.131557    7488 command_runner.go:130] > 5709459d3357
	I0409 01:13:25.131557    7488 command_runner.go:130] > 38b71116bee4
	I0409 01:13:25.131557    7488 command_runner.go:130] > 14703ff53a0b
	I0409 01:13:25.131557    7488 command_runner.go:130] > 1a9f657c2b5a
	I0409 01:13:25.131557    7488 command_runner.go:130] > 40c7183a37ea
	I0409 01:13:25.131557    7488 command_runner.go:130] > 0a2ad19ce50f
	I0409 01:13:25.131557    7488 command_runner.go:130] > 8fec401b4d08
	I0409 01:13:25.131557    7488 command_runner.go:130] > 45eca668cef5
	I0409 01:13:25.131557    7488 command_runner.go:130] > 729d2794ba86
	I0409 01:13:25.131557    7488 command_runner.go:130] > 9698a4747b5a
	I0409 01:13:25.131557    7488 command_runner.go:130] > 77b1d88aa162
	I0409 01:13:25.131557    7488 command_runner.go:130] > ac3e2538b3ca
	I0409 01:13:25.131557    7488 command_runner.go:130] > c41f8955903a
	I0409 01:13:25.131557    7488 command_runner.go:130] > bc594b9349b9
	I0409 01:13:25.131557    7488 docker.go:483] Stopping containers: [934a19227ceb 81bdf2c1b915 5709459d3357 38b71116bee4 14703ff53a0b 1a9f657c2b5a 40c7183a37ea 0a2ad19ce50f 8fec401b4d08 45eca668cef5 729d2794ba86 9698a4747b5a 77b1d88aa162 ac3e2538b3ca c41f8955903a bc594b9349b9]
	I0409 01:13:25.141187    7488 ssh_runner.go:195] Run: docker stop 934a19227ceb 81bdf2c1b915 5709459d3357 38b71116bee4 14703ff53a0b 1a9f657c2b5a 40c7183a37ea 0a2ad19ce50f 8fec401b4d08 45eca668cef5 729d2794ba86 9698a4747b5a 77b1d88aa162 ac3e2538b3ca c41f8955903a bc594b9349b9
	I0409 01:13:25.166886    7488 command_runner.go:130] > 934a19227ceb
	I0409 01:13:25.166886    7488 command_runner.go:130] > 81bdf2c1b915
	I0409 01:13:25.166886    7488 command_runner.go:130] > 5709459d3357
	I0409 01:13:25.166886    7488 command_runner.go:130] > 38b71116bee4
	I0409 01:13:25.167004    7488 command_runner.go:130] > 14703ff53a0b
	I0409 01:13:25.167004    7488 command_runner.go:130] > 1a9f657c2b5a
	I0409 01:13:25.167004    7488 command_runner.go:130] > 40c7183a37ea
	I0409 01:13:25.167004    7488 command_runner.go:130] > 0a2ad19ce50f
	I0409 01:13:25.167004    7488 command_runner.go:130] > 8fec401b4d08
	I0409 01:13:25.167004    7488 command_runner.go:130] > 45eca668cef5
	I0409 01:13:25.167004    7488 command_runner.go:130] > 729d2794ba86
	I0409 01:13:25.167004    7488 command_runner.go:130] > 9698a4747b5a
	I0409 01:13:25.167004    7488 command_runner.go:130] > 77b1d88aa162
	I0409 01:13:25.167004    7488 command_runner.go:130] > ac3e2538b3ca
	I0409 01:13:25.167108    7488 command_runner.go:130] > c41f8955903a
	I0409 01:13:25.167108    7488 command_runner.go:130] > bc594b9349b9
	I0409 01:13:25.178188    7488 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0409 01:13:25.218391    7488 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0409 01:13:25.237526    7488 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0409 01:13:25.237526    7488 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0409 01:13:25.237526    7488 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0409 01:13:25.237526    7488 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0409 01:13:25.238661    7488 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0409 01:13:25.238661    7488 kubeadm.go:157] found existing configuration files:
	
	I0409 01:13:25.250436    7488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0409 01:13:25.274293    7488 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0409 01:13:25.276177    7488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0409 01:13:25.287842    7488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0409 01:13:25.318654    7488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0409 01:13:25.333664    7488 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0409 01:13:25.333664    7488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0409 01:13:25.343598    7488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0409 01:13:25.371140    7488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0409 01:13:25.387373    7488 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0409 01:13:25.388339    7488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0409 01:13:25.400052    7488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0409 01:13:25.426641    7488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0409 01:13:25.442513    7488 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0409 01:13:25.442513    7488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0409 01:13:25.453854    7488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
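	[note] Each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint is absent (or, as here, when the file does not exist at all), so the `kubeadm init phase kubeconfig` run that follows regenerates all four files cleanly. A sketch of that grep-then-rm step (removeIfStale is a hypothetical helper):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // removeIfStale deletes a kubeconfig-style file unless it already
    // points at the expected endpoint. A file that does not exist counts
    // as already cleaned up.
    func removeIfStale(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if os.IsNotExist(err) {
    		return nil
    	}
    	if err != nil {
    		return err
    	}
    	if bytes.Contains(data, []byte(endpoint)) {
    		return nil // keep: already targets the right control plane
    	}
    	return os.Remove(path)
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := removeIfStale(f, endpoint); err != nil {
    			fmt.Fprintln(os.Stderr, err)
    		}
    	}
    }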
	I0409 01:13:25.484032    7488 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0409 01:13:25.503513    7488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0409 01:13:25.827655    7488 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0409 01:13:25.827655    7488 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0409 01:13:25.827655    7488 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0409 01:13:25.827655    7488 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0409 01:13:25.827811    7488 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0409 01:13:25.827811    7488 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0409 01:13:25.827811    7488 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0409 01:13:25.827811    7488 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0409 01:13:25.827811    7488 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0409 01:13:25.827811    7488 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0409 01:13:25.827811    7488 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0409 01:13:25.827895    7488 command_runner.go:130] > [certs] Using the existing "sa" key
	I0409 01:13:25.827933    7488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0409 01:13:26.612544    7488 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0409 01:13:26.612544    7488 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0409 01:13:26.612544    7488 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0409 01:13:26.612544    7488 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0409 01:13:26.612544    7488 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0409 01:13:26.612544    7488 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0409 01:13:26.612544    7488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0409 01:13:26.940998    7488 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0409 01:13:26.941045    7488 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0409 01:13:26.941076    7488 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0409 01:13:26.941128    7488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0409 01:13:27.023729    7488 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0409 01:13:27.024531    7488 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0409 01:13:27.024531    7488 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0409 01:13:27.024531    7488 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0409 01:13:27.024575    7488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0409 01:13:27.114628    7488 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0409 01:13:27.114756    7488 api_server.go:52] waiting for apiserver process to appear ...
	I0409 01:13:27.126255    7488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 01:13:27.626398    7488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 01:13:28.125633    7488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 01:13:28.627686    7488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 01:13:29.126975    7488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 01:13:29.151239    7488 command_runner.go:130] > 1936
	I0409 01:13:29.151336    7488 api_server.go:72] duration metric: took 2.0366299s to wait for apiserver process to appear ...
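	[note] The wait loop above re-runs `pgrep -xnf kube-apiserver.*minikube.*` roughly every 500 ms until a PID appears (1936 here, after ~2 s): -f matches against the full command line, -x anchors the regex to the whole line, and -n returns only the newest matching process. A sketch of the pattern (waitForProcess is an illustrative name; runs on a Linux host with pgrep available):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForProcess polls `pgrep -xnf pattern` until a PID appears or
    // the deadline passes. pgrep exits 1 while nothing matches.
    func waitForProcess(pattern string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil // e.g. "1936"
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return "", fmt.Errorf("no process matching %q after %s", pattern, timeout)
    }

    func main() {
    	pid, err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("apiserver pid:", pid)
    }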
	I0409 01:13:29.151336    7488 api_server.go:88] waiting for apiserver healthz status ...
	I0409 01:13:29.151438    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:13:34.152209    7488 api_server.go:269] stopped: https://192.168.120.172:8443/healthz: Get "https://192.168.120.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0409 01:13:34.152209    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:13:39.153458    7488 api_server.go:269] stopped: https://192.168.120.172:8443/healthz: Get "https://192.168.120.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0409 01:13:39.153458    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:13:44.153934    7488 api_server.go:269] stopped: https://192.168.120.172:8443/healthz: Get "https://192.168.120.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0409 01:13:44.153934    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:13:49.155160    7488 api_server.go:269] stopped: https://192.168.120.172:8443/healthz: Get "https://192.168.120.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0409 01:13:49.155160    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:13:50.194214    7488 api_server.go:269] stopped: https://192.168.120.172:8443/healthz: Get "https://192.168.120.172:8443/healthz": read tcp 192.168.112.1:55979->192.168.120.172:8443: wsarecv: An existing connection was forcibly closed by the remote host.
	I0409 01:13:50.194269    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:13:55.195174    7488 api_server.go:269] stopped: https://192.168.120.172:8443/healthz: Get "https://192.168.120.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0409 01:13:55.195174    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:14:00.196759    7488 api_server.go:269] stopped: https://192.168.120.172:8443/healthz: Get "https://192.168.120.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0409 01:14:00.196759    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:14:05.197876    7488 api_server.go:269] stopped: https://192.168.120.172:8443/healthz: Get "https://192.168.120.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0409 01:14:05.197876    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:14:09.090272    7488 api_server.go:279] https://192.168.120.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0409 01:14:09.090383    7488 api_server.go:103] status: https://192.168.120.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0409 01:14:09.090462    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:14:09.185554    7488 api_server.go:279] https://192.168.120.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0409 01:14:09.185554    7488 api_server.go:103] status: https://192.168.120.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0409 01:14:09.185636    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:14:09.207753    7488 api_server.go:279] https://192.168.120.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0409 01:14:09.208340    7488 api_server.go:103] status: https://192.168.120.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0409 01:14:09.652177    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:14:09.660224    7488 api_server.go:279] https://192.168.120.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0409 01:14:09.660467    7488 api_server.go:103] status: https://192.168.120.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0409 01:14:10.152952    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:14:10.159947    7488 api_server.go:279] https://192.168.120.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0409 01:14:10.159947    7488 api_server.go:103] status: https://192.168.120.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0409 01:14:10.653548    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:14:10.663178    7488 api_server.go:279] https://192.168.120.172:8443/healthz returned 200:
	ok
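	[note] The healthz poll above passes through three distinct phases before the final 200/"ok": client timeouts while the apiserver static pod boots, 403 once the TLS endpoint answers but the anonymous probe is rejected by RBAC, and 500 while poststarthooks such as rbac/bootstrap-roles finish. Treating 403 and 500 as "up but not ready" rather than as failures is what lets the loop tell a booting apiserver from a dead one. A sketch of one probe iteration (TLS verification is skipped here purely for brevity; the real client trusts the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // probeHealthz GETs /healthz once and returns the status code plus
    // body: 200 = healthy, 403 = apiserver up but RBAC not yet
    // bootstrapped, 500 = poststarthooks still running.
    func probeHealthz(base string) (int, string, error) {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(base + "/healthz")
    	if err != nil {
    		return 0, "", err // still booting: refused or timed out
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	return resp.StatusCode, string(body), nil
    }

    func main() {
    	for {
    		code, body, err := probeHealthz("https://192.168.120.172:8443")
    		if err == nil && code == 200 {
    			fmt.Println("healthz:", body) // "ok"
    			return
    		}
    		fmt.Println("not ready yet:", code, err)
    		time.Sleep(500 * time.Millisecond)
    	}
    }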
	I0409 01:14:10.663392    7488 discovery_client.go:658] "Request Body" body=""
	I0409 01:14:10.663573    7488 round_trippers.go:470] GET https://192.168.120.172:8443/version
	I0409 01:14:10.663573    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:10.663573    7488 round_trippers.go:480]     Accept: application/json, */*
	I0409 01:14:10.663630    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:10.673416    7488 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0409 01:14:10.673472    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:10.673472    7488 round_trippers.go:587]     Content-Length: 263
	I0409 01:14:10.673472    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:10 GMT
	I0409 01:14:10.673521    7488 round_trippers.go:587]     Audit-Id: ba911d4c-d0f4-4ad7-a64c-f8dc032553cf
	I0409 01:14:10.673521    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:10.673521    7488 round_trippers.go:587]     Content-Type: application/json
	I0409 01:14:10.673521    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:10.673521    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:10.673521    7488 discovery_client.go:658] "Response Body" body=<
		{
		  "major": "1",
		  "minor": "32",
		  "gitVersion": "v1.32.2",
		  "gitCommit": "67a30c0adcf52bd3f56ff0893ce19966be12991f",
		  "gitTreeState": "clean",
		  "buildDate": "2025-02-12T21:19:47Z",
		  "goVersion": "go1.23.6",
		  "compiler": "gc",
		  "platform": "linux/amd64"
		}
	 >
	I0409 01:14:10.673521    7488 api_server.go:141] control plane version: v1.32.2
	I0409 01:14:10.673521    7488 api_server.go:131] duration metric: took 41.5216554s to wait for apiserver health ...
	I0409 01:14:10.673521    7488 cni.go:84] Creating CNI manager for ""
	I0409 01:14:10.673521    7488 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0409 01:14:10.676855    7488 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0409 01:14:10.691280    7488 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0409 01:14:10.698786    7488 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0409 01:14:10.698786    7488 command_runner.go:130] >   Size: 3103192   	Blocks: 6064       IO Block: 4096   regular file
	I0409 01:14:10.698786    7488 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0409 01:14:10.698786    7488 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0409 01:14:10.698976    7488 command_runner.go:130] > Access: 2025-04-09 01:12:01.071156800 +0000
	I0409 01:14:10.698976    7488 command_runner.go:130] > Modify: 2025-01-14 09:03:58.000000000 +0000
	I0409 01:14:10.698976    7488 command_runner.go:130] > Change: 2025-04-09 01:11:49.988000000 +0000
	I0409 01:14:10.698976    7488 command_runner.go:130] >  Birth: -
	I0409 01:14:10.699113    7488 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0409 01:14:10.699113    7488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0409 01:14:10.751766    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0409 01:14:11.530995    7488 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0409 01:14:11.531078    7488 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0409 01:14:11.531078    7488 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0409 01:14:11.531078    7488 command_runner.go:130] > daemonset.apps/kindnet configured
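	[note] The kindnet CNI manifest was scp'd to /var/tmp/minikube/cni.yaml and applied with the versioned kubectl bundled inside the VM; because `kubectl apply` is declarative and idempotent, this restart reports "unchanged"/"configured" instead of failing on objects that already exist. A sketch of the invocation (paths copied from the log lines above):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // applyManifest runs kubectl with an explicit kubeconfig against a
    // manifest already present on disk, streaming its output through.
    func applyManifest(kubectl, kubeconfig, manifest string) error {
    	cmd := exec.Command(kubectl, "apply", "--kubeconfig="+kubeconfig, "-f", manifest)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	err := applyManifest(
    		"/var/lib/minikube/binaries/v1.32.2/kubectl",
    		"/var/lib/minikube/kubeconfig",
    		"/var/tmp/minikube/cni.yaml",
    	)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }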
	I0409 01:14:11.531688    7488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0409 01:14:11.531791    7488 type.go:204] "Request Body" body=""
	I0409 01:14:11.531791    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods
	I0409 01:14:11.531791    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:11.531791    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:11.531791    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:11.539009    7488 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0409 01:14:11.539009    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:11.539009    7488 round_trippers.go:587]     Audit-Id: 80344bf7-1cc7-406c-a82c-34d5902f9085
	I0409 01:14:11.539009    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:11.539009    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:11.539009    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:11.539009    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:11.539009    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:11 GMT
	I0409 01:14:11.542208    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 9f e3 03 0a  0a 0a 00 12 04 31 38 32  |ist..........182|
		00000020  39 1a 00 12 d4 27 0a ae  19 0a 18 63 6f 72 65 64  |9....'.....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 64 35  |ns-668d6bf9bc-d5|
		00000040  34 73 34 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |4s4..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 31 32 34 33 31 66 32  |ystem".*$12431f2|
		00000070  37 2d 37 65 34 65 2d 34  31 63 39 2d 38 64 35 34  |7-7e4e-41c9-8d54|
		00000080  2d 62 63 37 62 65 32 30  37 34 62 39 63 32 03 34  |-bc7be2074b9c2.4|
		00000090  33 36 38 00 42 08 08 96  88 d7 bf 06 10 00 5a 13  |368.B.........Z.|
		000000a0  0a 07 6b 38 73 2d 61 70  70 12 08 6b 75 62 65 2d  |..k8s-app..kube-|
		000000b0  64 6e 73 5a 1f 0a 11 70  6f 64 2d 74 65 6d 70 6c  |dnsZ...pod-templ|
		000000c0  61 74 65 2d 68 61 73 68  12 0a 36 36 38 64 36 62  |ate-hash..668d6 [truncated 304542 chars]
	 >
	I0409 01:14:11.543163    7488 system_pods.go:59] 12 kube-system pods found
	I0409 01:14:11.543163    7488 system_pods.go:61] "coredns-668d6bf9bc-d54s4" [12431f27-7e4e-41c9-8d54-bc7be2074b9c] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "etcd-multinode-611500" [622d9aaa-1f2f-435c-8cea-b53badba27f4] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "kindnet-66fr6" [3127adff-6b68-4ae6-8fea-cbee940bb9df] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "kindnet-v66j5" [9200b124-3c4b-442b-99fd-49ccc2faf534] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "kindnet-vntlr" [2e088361-08c9-4325-8241-20f5f443dcf6] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "kube-apiserver-multinode-611500" [50196775-bc0c-41c1-b36c-193695d2db23] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "kube-controller-manager-multinode-611500" [75af0b90-6c72-4624-8660-aa943fec9606] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "kube-proxy-bhjnx" [afb6da99-de99-49c4-b080-8500b4b08d9b] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "kube-proxy-xnh8p" [ed8e944e-e73d-444c-b1ee-d7155c771c96] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "kube-proxy-zxxgf" [3506eee7-d946-4dde-91c9-9fc5c1474434] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "kube-scheduler-multinode-611500" [9185d5c0-b28a-438c-b05a-64667e4ac3d7] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "storage-provisioner" [8f7ea37f-c3a7-44fc-ac99-c184b674aca3] Running
	I0409 01:14:11.543163    7488 system_pods.go:74] duration metric: took 11.372ms to wait for pod list to return data ...
	I0409 01:14:11.543163    7488 node_conditions.go:102] verifying NodePressure condition ...
	I0409 01:14:11.543163    7488 type.go:204] "Request Body" body=""
	I0409 01:14:11.543163    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes
	I0409 01:14:11.543163    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:11.543163    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:11.543163    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:11.547686    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:11.547686    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:11.547686    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:11 GMT
	I0409 01:14:11.547686    7488 round_trippers.go:587]     Audit-Id: 74695158-5ffa-4e89-8f7a-9977280a9f2e
	I0409 01:14:11.547686    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:11.547686    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:11.547686    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:11.547686    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:11.547686    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 86 5d 0a  0a 0a 00 12 04 31 38 32  |List..]......182|
		00000020  39 1a 00 12 e8 23 0a 8b  11 0a 10 6d 75 6c 74 69  |9....#.....multi|
		00000030  6e 6f 64 65 2d 36 31 31  35 30 30 12 00 1a 00 22  |node-611500...."|
		00000040  00 2a 24 62 31 32 35 32  66 34 61 2d 32 32 33 30  |.*$b1252f4a-2230|
		00000050  2d 34 36 61 36 2d 39 33  38 62 2d 37 63 30 37 31  |-46a6-938b-7c071|
		00000060  31 31 33 33 34 32 34 32  04 31 36 33 31 38 00 42  |11334242.16318.B|
		00000070  08 08 8d 88 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000080  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000090  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		000000a0  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000b0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/ar [truncated 57974 chars]
	 >
	I0409 01:14:11.548672    7488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0409 01:14:11.548672    7488 node_conditions.go:123] node cpu capacity is 2
	I0409 01:14:11.548672    7488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0409 01:14:11.548672    7488 node_conditions.go:123] node cpu capacity is 2
	I0409 01:14:11.548672    7488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0409 01:14:11.548672    7488 node_conditions.go:123] node cpu capacity is 2
	I0409 01:14:11.548672    7488 node_conditions.go:105] duration metric: took 5.5092ms to run NodePressure ...
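	[note] The NodePressure pass lists all three nodes and reads capacity straight off each node's status (17734596Ki of ephemeral storage and 2 CPUs per node above). An equivalent standalone check with client-go, shown as a sketch (assumes k8s.io/client-go and friends in go.mod and a kubeconfig that reaches the cluster; minikube itself builds its rest.Config in-process rather than from a file):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // Lists every node and prints the two capacity figures the log
    // checks: ephemeral storage and CPU count.
    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
    	}
    }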
	I0409 01:14:11.548672    7488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0409 01:14:11.863835    7488 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0409 01:14:11.863962    7488 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0409 01:14:11.864027    7488 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0409 01:14:11.864140    7488 type.go:204] "Request Body" body=""
	I0409 01:14:11.864140    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0409 01:14:11.864140    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:11.864140    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:11.864140    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:11.868765    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:11.868823    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:11.868875    7488 round_trippers.go:587]     Audit-Id: 2c1eff26-f41b-4508-b7ee-0c1bf6b30f0c
	I0409 01:14:11.868912    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:11.868912    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:11.868912    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:11.868912    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:11.868912    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:11 GMT
	I0409 01:14:11.869624    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 dc b3 01 0a  0a 0a 00 12 04 31 38 33  |ist..........183|
		00000020  31 1a 00 12 b6 2b 0a a0  1a 0a 15 65 74 63 64 2d  |1....+.....etcd-|
		00000030  6d 75 6c 74 69 6e 6f 64  65 2d 36 31 31 35 30 30  |multinode-611500|
		00000040  12 00 1a 0b 6b 75 62 65  2d 73 79 73 74 65 6d 22  |....kube-system"|
		00000050  00 2a 24 36 32 32 64 39  61 61 61 2d 31 66 32 66  |.*$622d9aaa-1f2f|
		00000060  2d 34 33 35 63 2d 38 63  65 61 2d 62 35 33 62 61  |-435c-8cea-b53ba|
		00000070  64 62 61 32 37 66 34 32  03 33 39 35 38 00 42 08  |dba27f42.3958.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 11 0a 09 63 6f 6d 70  |........Z...comp|
		00000090  6f 6e 65 6e 74 12 04 65  74 63 64 5a 15 0a 04 74  |onent..etcdZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 50 0a 30 6b 75  62 65 61 64 6d 2e 6b 75  |nebP.0kubeadm.ku|
		000000c0  62 65 72 6e 65 74 65 73  2e 69 6f 2f 65 74 63 64  |bernetes.io/etc [truncated 112727 chars]
	 >
	I0409 01:14:11.870402    7488 retry.go:31] will retry after 263.697513ms: kubelet not initialised
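[annotation] The GET above filters kube-system pods with the label selector tier=control-plane (URL-encoded as tier%3Dcontrol-plane), so only static control-plane pods are polled while waiting for the restarted kubelet. The equivalent typed call, sketched assuming an already-configured clientset:

package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listControlPlanePods mirrors the labelSelector query in the log above.
func listControlPlanePods(ctx context.Context, cs *kubernetes.Clientset) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
		metav1.ListOptions{LabelSelector: "tier=control-plane"})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name) // etcd-..., kube-apiserver-..., kube-scheduler-...
	}
	return nil
}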
	I0409 01:14:12.135287    7488 type.go:204] "Request Body" body=""
	I0409 01:14:12.135287    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0409 01:14:12.135287    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:12.135287    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:12.135287    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:12.139238    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:12.139238    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:12.139298    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:12.139298    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:12.139298    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:12.139298    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:12.139298    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:12 GMT
	I0409 01:14:12.139298    7488 round_trippers.go:587]     Audit-Id: a44e476b-8e12-4bec-83da-aa8cf1a76fd8
	I0409 01:14:12.140586    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 dc b3 01 0a  0a 0a 00 12 04 31 38 33  |ist..........183|
		00000020  31 1a 00 12 b6 2b 0a a0  1a 0a 15 65 74 63 64 2d  |1....+.....etcd-|
		00000030  6d 75 6c 74 69 6e 6f 64  65 2d 36 31 31 35 30 30  |multinode-611500|
		00000040  12 00 1a 0b 6b 75 62 65  2d 73 79 73 74 65 6d 22  |....kube-system"|
		00000050  00 2a 24 36 32 32 64 39  61 61 61 2d 31 66 32 66  |.*$622d9aaa-1f2f|
		00000060  2d 34 33 35 63 2d 38 63  65 61 2d 62 35 33 62 61  |-435c-8cea-b53ba|
		00000070  64 62 61 32 37 66 34 32  03 33 39 35 38 00 42 08  |dba27f42.3958.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 11 0a 09 63 6f 6d 70  |........Z...comp|
		00000090  6f 6e 65 6e 74 12 04 65  74 63 64 5a 15 0a 04 74  |onent..etcdZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 50 0a 30 6b 75  62 65 61 64 6d 2e 6b 75  |nebP.0kubeadm.ku|
		000000c0  62 65 72 6e 65 74 65 73  2e 69 6f 2f 65 74 63 64  |bernetes.io/etc [truncated 112727 chars]
	 >
	I0409 01:14:12.141163    7488 retry.go:31] will retry after 343.106119ms: kubelet not initialised
	I0409 01:14:12.484664    7488 type.go:204] "Request Body" body=""
	I0409 01:14:12.484664    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0409 01:14:12.484664    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:12.484664    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:12.484664    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:12.490019    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:12.490114    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:12.490114    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:12.490114    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:12.490114    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:12 GMT
	I0409 01:14:12.490114    7488 round_trippers.go:587]     Audit-Id: 8670cb0f-e38d-4cfe-8caf-481332afbb66
	I0409 01:14:12.490114    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:12.490114    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:12.491469    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 dc b3 01 0a  0a 0a 00 12 04 31 38 33  |ist..........183|
		00000020  31 1a 00 12 b6 2b 0a a0  1a 0a 15 65 74 63 64 2d  |1....+.....etcd-|
		00000030  6d 75 6c 74 69 6e 6f 64  65 2d 36 31 31 35 30 30  |multinode-611500|
		00000040  12 00 1a 0b 6b 75 62 65  2d 73 79 73 74 65 6d 22  |....kube-system"|
		00000050  00 2a 24 36 32 32 64 39  61 61 61 2d 31 66 32 66  |.*$622d9aaa-1f2f|
		00000060  2d 34 33 35 63 2d 38 63  65 61 2d 62 35 33 62 61  |-435c-8cea-b53ba|
		00000070  64 62 61 32 37 66 34 32  03 33 39 35 38 00 42 08  |dba27f42.3958.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 11 0a 09 63 6f 6d 70  |........Z...comp|
		00000090  6f 6e 65 6e 74 12 04 65  74 63 64 5a 15 0a 04 74  |onent..etcdZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 50 0a 30 6b 75  62 65 61 64 6d 2e 6b 75  |nebP.0kubeadm.ku|
		000000c0  62 65 72 6e 65 74 65 73  2e 69 6f 2f 65 74 63 64  |bernetes.io/etc [truncated 112727 chars]
	 >
	I0409 01:14:12.491896    7488 retry.go:31] will retry after 840.109319ms: kubelet not initialised
	I0409 01:14:13.332253    7488 type.go:204] "Request Body" body=""
	I0409 01:14:13.332253    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0409 01:14:13.332253    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:13.332253    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:13.332253    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:13.336668    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:13.336668    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:13.336668    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:13 GMT
	I0409 01:14:13.336668    7488 round_trippers.go:587]     Audit-Id: f5b3bf4b-a59e-4f77-9605-2bcb2dca0741
	I0409 01:14:13.337648    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:13.337648    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:13.337648    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:13.337648    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:13.338765    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 dc b3 01 0a  0a 0a 00 12 04 31 38 33  |ist..........183|
		00000020  31 1a 00 12 b6 2b 0a a0  1a 0a 15 65 74 63 64 2d  |1....+.....etcd-|
		00000030  6d 75 6c 74 69 6e 6f 64  65 2d 36 31 31 35 30 30  |multinode-611500|
		00000040  12 00 1a 0b 6b 75 62 65  2d 73 79 73 74 65 6d 22  |....kube-system"|
		00000050  00 2a 24 36 32 32 64 39  61 61 61 2d 31 66 32 66  |.*$622d9aaa-1f2f|
		00000060  2d 34 33 35 63 2d 38 63  65 61 2d 62 35 33 62 61  |-435c-8cea-b53ba|
		00000070  64 62 61 32 37 66 34 32  03 33 39 35 38 00 42 08  |dba27f42.3958.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 11 0a 09 63 6f 6d 70  |........Z...comp|
		00000090  6f 6e 65 6e 74 12 04 65  74 63 64 5a 15 0a 04 74  |onent..etcdZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 50 0a 30 6b 75  62 65 61 64 6d 2e 6b 75  |nebP.0kubeadm.ku|
		000000c0  62 65 72 6e 65 74 65 73  2e 69 6f 2f 65 74 63 64  |bernetes.io/etc [truncated 112727 chars]
	 >
	I0409 01:14:13.338815    7488 retry.go:31] will retry after 1.042076456s: kubelet not initialised
	I0409 01:14:14.381819    7488 type.go:204] "Request Body" body=""
	I0409 01:14:14.381819    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0409 01:14:14.381819    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.381819    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.381819    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.393247    7488 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0409 01:14:14.393247    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.393247    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.393247    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.393247    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.393247    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.393247    7488 round_trippers.go:587]     Audit-Id: b1f30868-724c-41b1-961f-e4d5661b2d66
	I0409 01:14:14.393247    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.394204    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 e3 a2 01 0a  0a 0a 00 12 04 31 38 35  |ist..........185|
		00000020  36 1a 00 12 dc 20 0a d5  13 0a 15 65 74 63 64 2d  |6.... .....etcd-|
		00000030  6d 75 6c 74 69 6e 6f 64  65 2d 36 31 31 35 30 30  |multinode-611500|
		00000040  12 00 1a 0b 6b 75 62 65  2d 73 79 73 74 65 6d 22  |....kube-system"|
		00000050  00 2a 24 65 36 62 33 39  62 31 61 2d 61 36 64 35  |.*$e6b39b1a-a6d5|
		00000060  2d 34 36 64 31 2d 61 35  36 61 2d 32 34 33 63 39  |-46d1-a56a-243c9|
		00000070  62 62 36 66 35 36 33 32  04 31 38 34 35 38 00 42  |bb6f5632.18458.B|
		00000080  08 08 e6 93 d7 bf 06 10  00 5a 11 0a 09 63 6f 6d  |.........Z...com|
		00000090  70 6f 6e 65 6e 74 12 04  65 74 63 64 5a 15 0a 04  |ponent..etcdZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 50 0a 30 6b  75 62 65 61 64 6d 2e 6b  |anebP.0kubeadm.k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 65 74 63  |ubernetes.io/et [truncated 101990 chars]
	 >
	I0409 01:14:14.394870    7488 kubeadm.go:739] kubelet initialised
	I0409 01:14:14.394870    7488 kubeadm.go:740] duration metric: took 2.5307302s waiting for restarted kubelet to initialise ...
	I0409 01:14:14.394935    7488 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
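[annotation] The four retry.go:31 waits above (~264ms, 343ms, 840ms, 1.04s) follow a jittered, roughly-doubling backoff until the kubelet reports initialised. A minimal standalone sketch of that pattern; the names are illustrative, not minikube's actual helpers:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn until it succeeds or the deadline passes,
// roughly doubling the wait and adding jitter between attempts.
func retryUntil(deadline time.Duration, fn func() error) error {
	wait := 250 * time.Millisecond
	start := time.Now()
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out: %w", err)
		}
		jittered := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		wait *= 2
	}
}

func main() {
	attempts := 0
	_ = retryUntil(5*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("kubelet not initialised")
		}
		return nil
	})
}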
	I0409 01:14:14.395030    7488 type.go:204] "Request Body" body=""
	I0409 01:14:14.395056    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods
	I0409 01:14:14.395056    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.395056    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.395056    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.406740    7488 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0409 01:14:14.406740    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.406740    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.406740    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.406740    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.406740    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.406740    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.406740    7488 round_trippers.go:587]     Audit-Id: 80dfbf8a-d183-457a-bdcc-c4736198db4c
	I0409 01:14:14.408635    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 c4 d4 03 0a  0a 0a 00 12 04 31 38 35  |ist..........185|
		00000020  37 1a 00 12 d4 27 0a ae  19 0a 18 63 6f 72 65 64  |7....'.....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 64 35  |ns-668d6bf9bc-d5|
		00000040  34 73 34 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |4s4..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 31 32 34 33 31 66 32  |ystem".*$12431f2|
		00000070  37 2d 37 65 34 65 2d 34  31 63 39 2d 38 64 35 34  |7-7e4e-41c9-8d54|
		00000080  2d 62 63 37 62 65 32 30  37 34 62 39 63 32 03 34  |-bc7be2074b9c2.4|
		00000090  33 36 38 00 42 08 08 96  88 d7 bf 06 10 00 5a 13  |368.B.........Z.|
		000000a0  0a 07 6b 38 73 2d 61 70  70 12 08 6b 75 62 65 2d  |..k8s-app..kube-|
		000000b0  64 6e 73 5a 1f 0a 11 70  6f 64 2d 74 65 6d 70 6c  |dnsZ...pod-templ|
		000000c0  61 74 65 2d 68 61 73 68  12 0a 36 36 38 64 36 62  |ate-hash..668d6 [truncated 295225 chars]
	 >
	I0409 01:14:14.409631    7488 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:14.409631    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.409631    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 01:14:14.409631    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.409631    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.409631    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.415014    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:14.415109    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.415109    7488 round_trippers.go:587]     Audit-Id: e44d81c6-f5a6-40d1-9812-2172e63ebd4e
	I0409 01:14:14.415109    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.415109    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.415109    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.415109    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.415109    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.415789    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d4 27 0a ae 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.'.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 64 35 34 73 34 12  |68d6bf9bc-d54s4.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 32 34  33 31 66 32 37 2d 37 65  |m".*$12431f27-7e|
		00000060  34 65 2d 34 31 63 39 2d  38 64 35 34 2d 62 63 37  |4e-41c9-8d54-bc7|
		00000070  62 65 32 30 37 34 62 39  63 32 03 34 33 36 38 00  |be2074b9c2.4368.|
		00000080  42 08 08 96 88 d7 bf 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 24171 chars]
	 >
	I0409 01:14:14.416164    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.416223    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:14.416288    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.416308    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.416308    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.426390    7488 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0409 01:14:14.426526    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.426526    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.426526    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.426526    7488 round_trippers.go:587]     Audit-Id: f38b70b1-d268-4582-8b21-9ab6d1c8b264
	I0409 01:14:14.426585    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.426585    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.426585    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.426585    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:14.426585    7488 pod_ready.go:98] node "multinode-611500" hosting pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:14.427143    7488 pod_ready.go:82] duration metric: took 17.5117ms for pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace to be "Ready" ...
	E0409 01:14:14.427221    7488 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-611500" hosting pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:14.427221    7488 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:14.427221    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.427319    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-611500
	I0409 01:14:14.427319    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.427385    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.427385    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.438634    7488 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0409 01:14:14.438634    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.438725    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.438725    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.438725    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.438725    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.438725    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.438725    7488 round_trippers.go:587]     Audit-Id: 9ea8e671-aedc-4698-a292-d36065631723
	I0409 01:14:14.440001    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  dc 20 0a d5 13 0a 15 65  74 63 64 2d 6d 75 6c 74  |. .....etcd-mult|
		00000020  69 6e 6f 64 65 2d 36 31  31 35 30 30 12 00 1a 0b  |inode-611500....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 65  |kube-system".*$e|
		00000040  36 62 33 39 62 31 61 2d  61 36 64 35 2d 34 36 64  |6b39b1a-a6d5-46d|
		00000050  31 2d 61 35 36 61 2d 32  34 33 63 39 62 62 36 66  |1-a56a-243c9bb6f|
		00000060  35 36 33 32 04 31 38 34  35 38 00 42 08 08 e6 93  |5632.18458.B....|
		00000070  d7 bf 06 10 00 5a 11 0a  09 63 6f 6d 70 6f 6e 65  |.....Z...compone|
		00000080  6e 74 12 04 65 74 63 64  5a 15 0a 04 74 69 65 72  |nt..etcdZ...tier|
		00000090  12 0d 63 6f 6e 74 72 6f  6c 2d 70 6c 61 6e 65 62  |..control-planeb|
		000000a0  50 0a 30 6b 75 62 65 61  64 6d 2e 6b 75 62 65 72  |P.0kubeadm.kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 65 74 63 64 2e 61 64  |netes.io/etcd.ad|
		000000c0  76 65 72 74 69 73 65 2d  63 6c 69 65 6e 74 2d 75  |vertise-client- [truncated 19818 chars]
	 >
	I0409 01:14:14.440245    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.440331    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:14.440355    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.440355    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.440355    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.445264    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:14.445264    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.446267    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.446267    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.446267    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.446267    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.446267    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.446267    7488 round_trippers.go:587]     Audit-Id: 565ebd19-beaa-4aa4-acf5-1695abbe0ff6
	I0409 01:14:14.447266    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:14.447266    7488 pod_ready.go:98] node "multinode-611500" hosting pod "etcd-multinode-611500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:14.447266    7488 pod_ready.go:82] duration metric: took 20.045ms for pod "etcd-multinode-611500" in "kube-system" namespace to be "Ready" ...
	E0409 01:14:14.447266    7488 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-611500" hosting pod "etcd-multinode-611500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
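[annotation] As the pod_ready.go:98 lines show, a pod is skipped while its hosting node reports Ready=False, regardless of the pod's own phase; that is why each pod check above is paired with a GET on /nodes/multinode-611500. The gate reduces to a node-condition check like this sketch (illustrative only):

package example

import (
	corev1 "k8s.io/api/core/v1"
)

// nodeIsReady reports whether a node's Ready condition is True,
// the condition consulted before a hosted pod can count as "Ready".
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}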
	I0409 01:14:14.447266    7488 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:14.448291    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.448291    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-611500
	I0409 01:14:14.448291    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.448291    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.448291    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.457260    7488 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0409 01:14:14.457260    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.457260    7488 round_trippers.go:587]     Audit-Id: 8591cffe-4d7e-4de9-990e-1a48388c13b4
	I0409 01:14:14.457260    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.457260    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.457260    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.457260    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.457260    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.458262    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a2 29 0a e5 15 0a 1f 6b  75 62 65 2d 61 70 69 73  |.).....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  36 31 31 35 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |611500....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 66 39 39 32 34 37 35  |ystem".*$f992475|
		00000050  34 2d 66 38 63 35 2d 34  61 38 62 2d 39 64 61 32  |4-f8c5-4a8b-9da2|
		00000060  2d 32 33 64 38 30 39 36  61 35 65 63 66 32 04 31  |-23d8096a5ecf2.1|
		00000070  38 34 33 38 00 42 08 08  e6 93 d7 bf 06 10 00 5a  |8438.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 61 70 69 73 65  72 76 65 72 5a 15 0a 04  |be-apiserverZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 57 0a 3f 6b  75 62 65 61 64 6d 2e 6b  |anebW.?kubeadm.k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 6b 75 62  |ubernetes.io/ku [truncated 25196 chars]
	 >
	I0409 01:14:14.458262    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.458262    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:14.458262    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.458262    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.458262    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.476279    7488 round_trippers.go:581] Response Status: 200 OK in 18 milliseconds
	I0409 01:14:14.476467    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.476467    7488 round_trippers.go:587]     Audit-Id: 9fcee71f-bea5-4797-a073-be0260c50827
	I0409 01:14:14.476467    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.476467    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.476467    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.476467    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.476467    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.478841    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:14.478963    7488 pod_ready.go:98] node "multinode-611500" hosting pod "kube-apiserver-multinode-611500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:14.479116    7488 pod_ready.go:82] duration metric: took 31.8497ms for pod "kube-apiserver-multinode-611500" in "kube-system" namespace to be "Ready" ...
	E0409 01:14:14.479116    7488 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-611500" hosting pod "kube-apiserver-multinode-611500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:14.479116    7488 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:14.479116    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.479116    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:14.479116    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.479116    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.479348    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.483291    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:14.483705    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.483705    7488 round_trippers.go:587]     Audit-Id: 48cb31cc-eee3-42d8-92a6-2b4f229e1d67
	I0409 01:14:14.483705    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.483705    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.483705    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.483705    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.483705    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.484147    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  de 34 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.4....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 38 35 36 38 00 42 08  |ec96062.18568.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 32460 chars]
	 >
	I0409 01:14:14.484421    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.484482    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:14.484482    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.484482    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.484482    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.507628    7488 round_trippers.go:581] Response Status: 200 OK in 22 milliseconds
	I0409 01:14:14.507628    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.507705    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.507705    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.507705    7488 round_trippers.go:587]     Audit-Id: bac051f1-d33a-4c5f-9474-3a8604a40910
	I0409 01:14:14.507705    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.507705    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.507705    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.508324    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:14.508532    7488 pod_ready.go:98] node "multinode-611500" hosting pod "kube-controller-manager-multinode-611500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:14.508569    7488 pod_ready.go:82] duration metric: took 29.4531ms for pod "kube-controller-manager-multinode-611500" in "kube-system" namespace to be "Ready" ...
	E0409 01:14:14.508569    7488 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-611500" hosting pod "kube-controller-manager-multinode-611500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:14.508569    7488 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bhjnx" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:14.508695    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.582287    7488 request.go:661] Waited for 73.5912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bhjnx
	I0409 01:14:14.582287    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bhjnx
	I0409 01:14:14.582287    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.582287    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.582287    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.585309    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:14.586083    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.586083    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.586083    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.586083    7488 round_trippers.go:587]     Audit-Id: c3d6e252-6b44-47bd-b0a6-53c87161488b
	I0409 01:14:14.586083    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.586168    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.586168    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.589099    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  af 25 0a c1 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 62 68 6a 6e 78 12  0b 6b 75 62 65 2d 70 72  |y-bhjnx..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 61 66 62  36 64 61 39 39 2d 64 65  |m".*$afb6da99-de|
		00000050  39 39 2d 34 39 63 34 2d  62 30 38 30 2d 38 35 30  |99-49c4-b080-850|
		00000060  30 62 34 62 30 38 64 39  62 32 03 36 32 35 38 00  |0b4b08d9b2.6258.|
		00000070  42 08 08 d1 89 d7 bf 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  37 62 62 38 34 63 34 39  |n-hash..7bb84c49|
		000000a0  38 34 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |84Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22744 chars]
	 >
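[annotation] The request.go:661 "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter: once Burst tokens are spent, each request is delayed until tokens refill at QPS per second. A sketch of the knobs involved, using client-go's documented defaults:

package example

import "k8s.io/client-go/rest"

// tuneClientRate adjusts client-go's client-side limiter. Requests beyond
// Burst are queued until tokens refill at QPS per second, which produces
// the "Waited for ..." log lines above when the queue delay grows.
func tuneClientRate(cfg *rest.Config) {
	cfg.QPS = 5    // steady-state requests per second (client-go default)
	cfg.Burst = 10 // short-burst allowance (client-go default)
}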
	I0409 01:14:14.589473    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.782118    7488 request.go:661] Waited for 192.6432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/nodes/multinode-611500-m02
	I0409 01:14:14.782118    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500-m02
	I0409 01:14:14.782118    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.782118    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.782118    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.795223    7488 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0409 01:14:14.795223    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.795308    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.795308    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.795308    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.795308    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.795308    7488 round_trippers.go:587]     Content-Length: 3466
	I0409 01:14:14.795308    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.795308    7488 round_trippers.go:587]     Audit-Id: 39570308-c6d2-434b-ac7b-6ed1988bcc3b
	I0409 01:14:14.796124    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f3 1a 0a b0 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 04 31 37 37 34 38 00  |bd39faf32.17748.|
		00000060  42 08 08 d1 89 d7 bf 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16113 chars]
	 >
	I0409 01:14:14.796124    7488 pod_ready.go:93] pod "kube-proxy-bhjnx" in "kube-system" namespace has status "Ready":"True"
	I0409 01:14:14.796124    7488 pod_ready.go:82] duration metric: took 287.551ms for pod "kube-proxy-bhjnx" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:14.796124    7488 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xnh8p" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:14.796124    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.983178    7488 request.go:661] Waited for 187.0516ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnh8p
	I0409 01:14:14.983178    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnh8p
	I0409 01:14:14.983178    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.983178    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.983178    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.988143    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:14.988289    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.988289    7488 round_trippers.go:587]     Audit-Id: 3341b43c-fefd-4d3f-9f64-df43e1c356b9
	I0409 01:14:14.988289    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.988289    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.988289    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.988289    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.988385    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:15 GMT
	I0409 01:14:14.988822    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b4 26 0a c5 15 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 78 6e 68 38 70 12  0b 6b 75 62 65 2d 70 72  |y-xnh8p..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 65 64 38  65 39 34 34 65 2d 65 37  |m".*$ed8e944e-e7|
		00000050  33 64 2d 34 34 34 63 2d  62 31 65 65 2d 64 37 31  |3d-444c-b1ee-d71|
		00000060  35 35 63 37 37 31 63 39  36 32 04 31 38 31 31 38  |55c771c962.18118|
		00000070  00 42 08 08 f5 8b d7 bf  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 37 62 62 38 34 63 34  |on-hash..7bb84c4|
		000000a0  39 38 34 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |984Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23381 chars]
	 >
	I0409 01:14:14.988822    7488 type.go:168] "Request Body" body=""
	I0409 01:14:15.182398    7488 request.go:661] Waited for 193.5734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/nodes/multinode-611500-m03
	I0409 01:14:15.182398    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500-m03
	I0409 01:14:15.182398    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:15.182398    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:15.182398    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:15.188224    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:15.188224    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:15.188224    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:15.188224    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:15.188224    7488 round_trippers.go:587]     Content-Length: 3885
	I0409 01:14:15.188224    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:15 GMT
	I0409 01:14:15.188224    7488 round_trippers.go:587]     Audit-Id: a0529b4d-6e8f-4763-952a-9e4e34eed07a
	I0409 01:14:15.188224    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:15.188224    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:15.188785    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 96 1e 0a eb 12 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 33 12 00 1a 00  |e-611500-m03....|
		00000030  22 00 2a 24 38 63 66 33  37 34 64 36 2d 31 66 62  |".*$8cf374d6-1fb|
		00000040  30 2d 34 30 36 38 2d 39  62 66 39 2d 30 62 32 37  |0-4068-9bf9-0b27|
		00000050  61 34 32 61 63 66 34 39  32 04 31 38 31 38 38 00  |a42acf492.18188.|
		00000060  42 08 08 a0 91 d7 bf 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 18170 chars]
	 >
	I0409 01:14:15.188915    7488 pod_ready.go:98] node "multinode-611500-m03" hosting pod "kube-proxy-xnh8p" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500-m03" has status "Ready":"Unknown"
	I0409 01:14:15.189004    7488 pod_ready.go:82] duration metric: took 392.8751ms for pod "kube-proxy-xnh8p" in "kube-system" namespace to be "Ready" ...
	E0409 01:14:15.189091    7488 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-611500-m03" hosting pod "kube-proxy-xnh8p" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500-m03" has status "Ready":"Unknown"
	I0409 01:14:15.189091    7488 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zxxgf" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:15.189158    7488 type.go:168] "Request Body" body=""
	I0409 01:14:15.382832    7488 request.go:661] Waited for 193.6059ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxxgf
	I0409 01:14:15.382832    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxxgf
	I0409 01:14:15.382832    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:15.382832    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:15.382832    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:15.389387    7488 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 01:14:15.389502    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:15.389502    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:15.389502    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:15.389502    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:15 GMT
	I0409 01:14:15.389502    7488 round_trippers.go:587]     Audit-Id: 24ac58f6-7edd-4ff2-98f8-4d7325262b04
	I0409 01:14:15.389502    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:15.389502    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:15.389954    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  c3 27 0a fc 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.'.....kube-prox|
		00000020  79 2d 7a 78 78 67 66 12  0b 6b 75 62 65 2d 70 72  |y-zxxgf..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 33 35 30  36 65 65 65 37 2d 64 39  |m".*$3506eee7-d9|
		00000050  34 36 2d 34 64 64 65 2d  39 31 63 39 2d 39 66 63  |46-4dde-91c9-9fc|
		00000060  35 63 31 34 37 34 34 33  34 32 04 31 38 36 32 38  |5c14744342.18628|
		00000070  00 42 08 08 96 88 d7 bf  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 37 62 62 38 34 63 34  |on-hash..7bb84c4|
		000000a0  39 38 34 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |984Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 24091 chars]
	 >
	I0409 01:14:15.390360    7488 type.go:168] "Request Body" body=""
	I0409 01:14:15.582580    7488 request.go:661] Waited for 192.2183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:15.582580    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:15.582580    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:15.582580    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:15.582580    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:15.587541    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:15.587607    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:15.587607    7488 round_trippers.go:587]     Audit-Id: 601845b6-2c1c-426f-a464-38e705f48b9f
	I0409 01:14:15.587607    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:15.587607    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:15.587607    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:15.587607    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:15.587607    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:15 GMT
	I0409 01:14:15.588141    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:15.588409    7488 pod_ready.go:98] node "multinode-611500" hosting pod "kube-proxy-zxxgf" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:15.588409    7488 pod_ready.go:82] duration metric: took 399.3121ms for pod "kube-proxy-zxxgf" in "kube-system" namespace to be "Ready" ...
	E0409 01:14:15.588409    7488 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-611500" hosting pod "kube-proxy-zxxgf" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
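The skip above is readiness gating: a pod such as kube-proxy-zxxgf cannot be counted Ready while its hosting node still reports "Ready":"False", so pod_ready records a WaitExtra error and moves on to the next pod. A sketch of that node-condition check (not minikube's actual source; nodeIsReady is a hypothetical helper), assuming k8s.io/api/core/v1:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // nodeIsReady reports whether the node's Ready condition is True.
    func nodeIsReady(node *corev1.Node) bool {
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Mirrors the log: multinode-611500 reports "Ready":"False".
    	node := &corev1.Node{
    		Status: corev1.NodeStatus{
    			Conditions: []corev1.NodeCondition{
    				{Type: corev1.NodeReady, Status: corev1.ConditionFalse},
    			},
    		},
    	}
    	fmt.Println("node ready:", nodeIsReady(node)) // prints: node ready: false
    }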
	I0409 01:14:15.588409    7488 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:15.588409    7488 type.go:168] "Request Body" body=""
	I0409 01:14:15.782632    7488 request.go:661] Waited for 194.2212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-611500
	I0409 01:14:15.782632    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-611500
	I0409 01:14:15.782632    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:15.782632    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:15.782632    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:15.788010    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:15.788081    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:15.788081    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:15.788081    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:15 GMT
	I0409 01:14:15.788139    7488 round_trippers.go:587]     Audit-Id: ed3e344e-fca2-47fd-8af9-9a3685b601cf
	I0409 01:14:15.788139    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:15.788139    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:15.788139    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:15.788139    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  ef 23 0a 84 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.#.....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  36 31 31 35 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |611500....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 39 31 38 35 64 35 63  |ystem".*$9185d5c|
		00000050  30 2d 62 32 38 61 2d 34  33 38 63 2d 62 30 35 61  |0-b28a-438c-b05a|
		00000060  2d 36 34 36 36 37 65 34  61 63 33 64 37 32 04 31  |-64667e4ac3d72.1|
		00000070  38 35 33 38 00 42 08 08  90 88 d7 bf 06 10 00 5a  |8538.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 73 63 68 65 64  75 6c 65 72 5a 15 0a 04  |be-schedulerZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 3d 0a 19 6b  75 62 65 72 6e 65 74 65  |aneb=..kubernete|
		000000c0  73 2e 69 6f 2f 63 6f 6e  66 69 67 2e 68 61 73 68  |s.io/config.has [truncated 21796 chars]
	 >
	I0409 01:14:15.788798    7488 type.go:168] "Request Body" body=""
	I0409 01:14:15.982404    7488 request.go:661] Waited for 193.6042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:15.982404    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:15.982404    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:15.982404    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:15.982404    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:15.990714    7488 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0409 01:14:15.990794    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:15.990794    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:15.990794    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:15.990794    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:15.990794    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:16 GMT
	I0409 01:14:15.990794    7488 round_trippers.go:587]     Audit-Id: 20f9833e-db1a-41fb-aad3-d8cb4f7eb03a
	I0409 01:14:15.990794    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:15.991451    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:15.991599    7488 pod_ready.go:98] node "multinode-611500" hosting pod "kube-scheduler-multinode-611500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:15.991599    7488 pod_ready.go:82] duration metric: took 403.1848ms for pod "kube-scheduler-multinode-611500" in "kube-system" namespace to be "Ready" ...
	E0409 01:14:15.991599    7488 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-611500" hosting pod "kube-scheduler-multinode-611500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:15.991599    7488 pod_ready.go:39] duration metric: took 1.5966436s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
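The "Waited for … due to client-side throttling, not priority and fairness" lines sprinkled through this phase come from client-go's local token-bucket rate limiter (QPS/Burst), which delays requests before they ever reach the server-side priority-and-fairness queues whose flow-schema UIDs appear in the response headers. A minimal sketch of that limiter; the 5 QPS / burst 10 values are illustrative client-go defaults, not read from minikube:

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/util/flowcontrol"
    )

    func main() {
    	// Token bucket: 5 requests/second sustained, bursts of up to 10.
    	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)

    	start := time.Now()
    	for i := 0; i < 15; i++ {
    		limiter.Accept() // blocks once the burst is spent, like request.go:661
    	}
    	fmt.Printf("15 requests took %v due to client-side throttling\n", time.Since(start))
    }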
	I0409 01:14:15.991599    7488 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0409 01:14:16.010734    7488 command_runner.go:130] > -16
	I0409 01:14:16.011197    7488 ops.go:34] apiserver oom_adj: -16
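The -16 read back above comes from cat /proc/$(pgrep kube-apiserver)/oom_adj: a negative legacy OOM adjustment that makes the kernel much less likely to OOM-kill the apiserver under memory pressure. A small Go equivalent of that probe (run against the current process for the demo; on modern kernels oom_adj is the deprecated alias of oom_score_adj):

    package main

    import (
    	"fmt"
    	"os"
    	"strconv"
    	"strings"
    )

    // oomAdj reads the legacy /proc/<pid>/oom_adj value, the same file the
    // ssh_runner command above cats for the kube-apiserver pid.
    func oomAdj(pid int) (int, error) {
    	raw, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
    	if err != nil {
    		return 0, err
    	}
    	return strconv.Atoi(strings.TrimSpace(string(raw)))
    }

    func main() {
    	adj, err := oomAdj(os.Getpid()) // demo: probe this process, not the apiserver
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err) // e.g. on non-Linux hosts
    		os.Exit(1)
    	}
    	fmt.Println("oom_adj:", adj) // kube-apiserver reported -16 above
    }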
	I0409 01:14:16.011197    7488 kubeadm.go:597] duration metric: took 51.0097751s to restartPrimaryControlPlane
	I0409 01:14:16.011197    7488 kubeadm.go:394] duration metric: took 51.0777842s to StartCluster
	I0409 01:14:16.011197    7488 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 01:14:16.011197    7488 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0409 01:14:16.013192    7488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 01:14:16.014170    7488 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.120.172 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0409 01:14:16.014170    7488 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0409 01:14:16.015192    7488 config.go:182] Loaded profile config "multinode-611500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 01:14:16.022178    7488 out.go:177] * Verifying Kubernetes components...
	I0409 01:14:16.027500    7488 out.go:177] * Enabled addons: 
	I0409 01:14:16.036248    7488 addons.go:514] duration metric: took 22.0781ms for enable addons: enabled=[]
	I0409 01:14:16.047271    7488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 01:14:16.348939    7488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0409 01:14:16.375040    7488 node_ready.go:35] waiting up to 6m0s for node "multinode-611500" to be "Ready" ...
	I0409 01:14:16.375343    7488 type.go:168] "Request Body" body=""
	I0409 01:14:16.375483    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:16.375483    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:16.375483    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:16.375483    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:16.380130    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:16.380130    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:16.380130    7488 round_trippers.go:587]     Audit-Id: fc6ce43f-720a-4f39-bc1d-e97aadb432cc
	I0409 01:14:16.380130    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:16.380130    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:16.380130    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:16.380130    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:16.380130    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:16 GMT
	I0409 01:14:16.380130    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
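From here the log settles into node_ready's polling loop: the same GET /api/v1/nodes/multinode-611500 is reissued roughly every 500ms (compare the timestamps) for up to the 6m0s announced above, until the Node's Ready condition turns True. A sketch of that loop shape using k8s.io/apimachinery's wait helpers; getNodeReady is a hypothetical stand-in for the authenticated GET, and the 2s timeout is shortened from the real 6m0s just so the demo finishes quickly:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    // getNodeReady stands in for the GET /api/v1/nodes/<name> seen in the log;
    // a real implementation would inspect status.conditions for type Ready.
    func getNodeReady(name string) (bool, error) {
    	return false, nil // the log's node keeps reporting "Ready":"False"
    }

    func main() {
    	// Real loop: interval ~500ms, timeout 6m0s (node_ready.go:35).
    	err := wait.PollUntilContextTimeout(context.Background(),
    		500*time.Millisecond, 2*time.Second, true,
    		func(ctx context.Context) (bool, error) {
    			return getNodeReady("multinode-611500")
    		})
    	if err != nil {
    		fmt.Println(`node "multinode-611500" did not become Ready:`, err)
    	}
    }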
	I0409 01:14:16.876118    7488 type.go:168] "Request Body" body=""
	I0409 01:14:16.876118    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:16.876118    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:16.876118    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:16.876118    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:16.880600    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:16.880700    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:16.880899    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:16 GMT
	I0409 01:14:16.880899    7488 round_trippers.go:587]     Audit-Id: 81456f43-064f-4e92-8c70-89edd8e0cda5
	I0409 01:14:16.880899    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:16.880899    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:16.880899    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:16.880899    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:16.881327    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:17.376254    7488 type.go:168] "Request Body" body=""
	I0409 01:14:17.376254    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:17.376254    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:17.376254    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:17.376254    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:17.380638    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:17.380638    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:17.380638    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:17.380638    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:17 GMT
	I0409 01:14:17.380638    7488 round_trippers.go:587]     Audit-Id: e2cd0d90-1465-45bc-8aa1-1053d997219c
	I0409 01:14:17.380638    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:17.380638    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:17.380638    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:17.381302    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:17.876369    7488 type.go:168] "Request Body" body=""
	I0409 01:14:17.876417    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:17.876417    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:17.876417    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:17.876417    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:17.880422    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:17.880545    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:17.880545    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:17.880545    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:17.880623    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:17 GMT
	I0409 01:14:17.880623    7488 round_trippers.go:587]     Audit-Id: 1c66ac7a-9439-486e-8cb5-15eb0e3a4d54
	I0409 01:14:17.880623    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:17.880623    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:17.880669    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:18.376308    7488 type.go:168] "Request Body" body=""
	I0409 01:14:18.376308    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:18.376308    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:18.376308    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:18.376308    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:18.380485    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:18.380519    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:18.380519    7488 round_trippers.go:587]     Audit-Id: 8262bd89-bf5e-4951-be93-7eb4a8156f5a
	I0409 01:14:18.380519    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:18.380519    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:18.380519    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:18.380519    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:18.380519    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:18 GMT
	I0409 01:14:18.381086    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:18.381417    7488 node_ready.go:53] node "multinode-611500" has status "Ready":"False"
	I0409 01:14:18.875942    7488 type.go:168] "Request Body" body=""
	I0409 01:14:18.875972    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:18.875972    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:18.875972    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:18.875972    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:18.880930    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:18.880930    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:18.880930    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:18 GMT
	I0409 01:14:18.880930    7488 round_trippers.go:587]     Audit-Id: 4e81f934-494c-436c-877f-8a8e32822b3c
	I0409 01:14:18.880930    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:18.880930    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:18.880930    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:18.880930    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:18.881559    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:19.376859    7488 type.go:168] "Request Body" body=""
	I0409 01:14:19.377014    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:19.377014    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:19.377014    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:19.377014    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:19.381388    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:19.381519    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:19.381519    7488 round_trippers.go:587]     Audit-Id: 295eab8e-a84c-4cc9-bc58-b5a7fcaa4eee
	I0409 01:14:19.381519    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:19.381519    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:19.381519    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:19.381519    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:19.381519    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:19 GMT
	I0409 01:14:19.381892    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:19.876049    7488 type.go:168] "Request Body" body=""
	I0409 01:14:19.876049    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:19.876049    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:19.876049    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:19.876407    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:19.882533    7488 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 01:14:19.882624    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:19.882624    7488 round_trippers.go:587]     Audit-Id: 19dd1885-2d2f-4997-9349-5d930c23f77f
	I0409 01:14:19.882624    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:19.882684    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:19.882684    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:19.882684    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:19.882708    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:19 GMT
	I0409 01:14:19.884440    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:20.376575    7488 type.go:168] "Request Body" body=""
	I0409 01:14:20.376575    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:20.376575    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:20.376575    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:20.376575    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:20.380056    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:20.380056    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:20.380204    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:20.380204    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:20.380204    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:20.380204    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:20.380204    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:20 GMT
	I0409 01:14:20.380204    7488 round_trippers.go:587]     Audit-Id: d44198ba-63c2-4dcf-ba97-994666a9cf58
	I0409 01:14:20.380364    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:20.877005    7488 type.go:168] "Request Body" body=""
	I0409 01:14:20.877005    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:20.877005    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:20.877005    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:20.877005    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:20.881552    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:20.881552    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:20.881552    7488 round_trippers.go:587]     Audit-Id: 204aed9f-006c-438c-894c-6c70826a68e6
	I0409 01:14:20.881552    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:20.881552    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:20.881552    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:20.881552    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:20.881552    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:20 GMT
	I0409 01:14:20.882233    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:20.882419    7488 node_ready.go:53] node "multinode-611500" has status "Ready":"False"
	I0409 01:14:21.375520    7488 type.go:168] "Request Body" body=""
	I0409 01:14:21.375520    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:21.375520    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:21.375520    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:21.375520    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:21.381557    7488 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 01:14:21.381635    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:21.381635    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:21.381635    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:21.381635    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:21.381635    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:21.381635    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:21 GMT
	I0409 01:14:21.381635    7488 round_trippers.go:587]     Audit-Id: 21c63af6-60a4-420c-aa59-14f090cba1c6
	I0409 01:14:21.382576    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:21.876421    7488 type.go:168] "Request Body" body=""
	I0409 01:14:21.876421    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:21.876421    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:21.876421    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:21.876421    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:21.881007    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:21.881063    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:21.881063    7488 round_trippers.go:587]     Audit-Id: 77d4cf1d-e7c9-4957-b93f-bb82f50009de
	I0409 01:14:21.881063    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:21.881063    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:21.881063    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:21.881063    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:21.881063    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:21 GMT
	I0409 01:14:21.881434    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:22.376630    7488 type.go:168] "Request Body" body=""
	I0409 01:14:22.376630    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:22.376630    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:22.376630    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:22.376630    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:22.381213    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:22.381213    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:22.381213    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:22.381213    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:22.381213    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:22.381213    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:22.381213    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:22 GMT
	I0409 01:14:22.381213    7488 round_trippers.go:587]     Audit-Id: 1a0200a0-cf44-4b10-a750-99add4779cf5
	I0409 01:14:22.381586    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:22.876370    7488 type.go:168] "Request Body" body=""
	I0409 01:14:22.876370    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:22.876370    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:22.876370    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:22.876370    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:22.881439    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:22.881508    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:22.881508    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:22.881508    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:22.881508    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:22.881508    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:22.881508    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:22 GMT
	I0409 01:14:22.881588    7488 round_trippers.go:587]     Audit-Id: cdbdc35d-6f3e-433f-a040-a905d13a13c9
	I0409 01:14:22.882003    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:23.375586    7488 type.go:168] "Request Body" body=""
	I0409 01:14:23.375586    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:23.375586    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:23.375586    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:23.375586    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:23.379943    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:23.379943    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:23.379943    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:23.380106    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:23.380106    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:23 GMT
	I0409 01:14:23.380106    7488 round_trippers.go:587]     Audit-Id: 85457c91-3b37-4c5f-a2ea-f20e3ae074b7
	I0409 01:14:23.380106    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:23.380106    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:23.380447    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:23.380765    7488 node_ready.go:53] node "multinode-611500" has status "Ready":"False"
	I0409 01:14:23.876110    7488 type.go:168] "Request Body" body=""
	I0409 01:14:23.876110    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:23.876110    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:23.876110    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:23.876110    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:23.879671    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:23.879671    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:23.879671    7488 round_trippers.go:587]     Audit-Id: 4c98b0ef-ca36-469d-a4e3-ebd0477fee9b
	I0409 01:14:23.879671    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:23.879671    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:23.879671    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:23.879671    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:23.879671    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:23 GMT
	I0409 01:14:23.879671    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:24.376115    7488 type.go:168] "Request Body" body=""
	I0409 01:14:24.376115    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:24.376115    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:24.376115    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:24.376115    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:24.380031    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:24.380120    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:24.380120    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:24 GMT
	I0409 01:14:24.380120    7488 round_trippers.go:587]     Audit-Id: 5d477377-182b-412c-b66a-436fbb744098
	I0409 01:14:24.380120    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:24.380120    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:24.380120    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:24.380120    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:24.380552    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:24.876183    7488 type.go:168] "Request Body" body=""
	I0409 01:14:24.876183    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:24.876183    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:24.876183    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:24.876183    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:24.885586    7488 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0409 01:14:24.885586    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:24.885586    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:24.885586    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:24.885586    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:24.885586    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:24 GMT
	I0409 01:14:24.885586    7488 round_trippers.go:587]     Audit-Id: 94086975-1f25-4412-80af-cdc55c26fb66
	I0409 01:14:24.885586    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:24.885977    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:25.375506    7488 type.go:168] "Request Body" body=""
	I0409 01:14:25.375506    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:25.375506    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:25.375506    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:25.375506    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:25.379480    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:25.379480    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:25.379480    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:25.379480    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:25.379480    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:25 GMT
	I0409 01:14:25.379480    7488 round_trippers.go:587]     Audit-Id: 0551d055-b670-41c8-92aa-76d189743da8
	I0409 01:14:25.379480    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:25.379480    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:25.380105    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
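	[editor's note] The Accept header logged on every request above (application/vnd.kubernetes.protobuf,application/json) asks the API server for protobuf first with a JSON fallback, and the server answers with Content-Type: application/vnd.kubernetes.protobuf, which is why the response bodies in this log are hex dumps rather than JSON. A minimal, self-contained sketch of opting into the same encoding with client-go; the node name is taken from this run, and nothing here is minikube's actual code:

		package main

		import (
			"context"
			"fmt"

			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)

		func main() {
			// Load the default kubeconfig (~/.kube/config).
			config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
			if err != nil {
				panic(err)
			}
			// Prefer protobuf, fall back to JSON -- the same Accept header
			// the client sends in the requests logged above.
			config.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
			config.ContentType = "application/vnd.kubernetes.protobuf"

			clientset, err := kubernetes.NewForConfig(config)
			if err != nil {
				panic(err)
			}
			node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "multinode-611500", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			fmt.Println(node.Name)
		}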
	I0409 01:14:25.876064    7488 type.go:168] "Request Body" body=""
	I0409 01:14:25.876064    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:25.876064    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:25.876064    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:25.876064    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:25.880713    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:25.880713    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:25.880713    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:25.880786    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:25 GMT
	I0409 01:14:25.880786    7488 round_trippers.go:587]     Audit-Id: b27b1650-8b0e-4266-b1e6-18efc3e60cfc
	I0409 01:14:25.880786    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:25.880786    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:25.880786    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:25.881897    7488 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-identical to the Node response body above; truncated 22910 chars)
	 >
	I0409 01:14:25.882307    7488 node_ready.go:53] node "multinode-611500" has status "Ready":"False"
	I0409 01:14:26.377014    7488 type.go:168] "Request Body" body=""
	I0409 01:14:26.377014    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:26.377014    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:26.377014    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:26.377014    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:26.381624    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:26.381624    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:26.381624    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:26.381624    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:26.381624    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:26.381624    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:26 GMT
	I0409 01:14:26.381624    7488 round_trippers.go:587]     Audit-Id: db931c8f-cc39-4187-834d-99316b10e1b3
	I0409 01:14:26.381624    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:26.382541    7488 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-identical to the Node response body above; truncated 22910 chars)
	 >
	I0409 01:14:26.876463    7488 type.go:168] "Request Body" body=""
	I0409 01:14:26.877143    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:26.877143    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:26.877143    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:26.877143    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:26.881795    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:26.881795    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:26.881795    7488 round_trippers.go:587]     Audit-Id: 5213cf94-4a3b-4e7e-93a0-679113b15edf
	I0409 01:14:26.881795    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:26.881795    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:26.881795    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:26.881795    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:26.881795    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:26 GMT
	I0409 01:14:26.882369    7488 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-identical to the Node response body above; truncated 22910 chars)
	 >
	I0409 01:14:27.375859    7488 type.go:168] "Request Body" body=""
	I0409 01:14:27.375859    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:27.375859    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:27.375859    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:27.375859    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:27.380876    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:27.380876    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:27.380876    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:27.380876    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:27 GMT
	I0409 01:14:27.380876    7488 round_trippers.go:587]     Audit-Id: 68783c3e-b04b-429d-a668-f83c1081a1e0
	I0409 01:14:27.380876    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:27.380876    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:27.380876    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:27.381413    7488 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-identical to the Node response body above; truncated 22910 chars)
	 >
	I0409 01:14:27.875958    7488 type.go:168] "Request Body" body=""
	I0409 01:14:27.875958    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:27.875958    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:27.875958    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:27.875958    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:27.880265    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:27.880418    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:27.880439    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:27 GMT
	I0409 01:14:27.880439    7488 round_trippers.go:587]     Audit-Id: 07d6adbd-aba1-4892-8e92-581c30fcc1a4
	I0409 01:14:27.880439    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:27.880439    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:27.880439    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:27.880439    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:27.880765    7488 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-identical to the Node response body above; truncated 22910 chars)
	 >
	I0409 01:14:28.375532    7488 type.go:168] "Request Body" body=""
	I0409 01:14:28.376184    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:28.376252    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:28.376282    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:28.376282    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:28.381142    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:28.381199    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:28.381222    7488 round_trippers.go:587]     Audit-Id: b0853d67-9295-438f-ba1c-6010949a0021
	I0409 01:14:28.381222    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:28.381222    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:28.381222    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:28.381282    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:28.381282    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:28 GMT
	I0409 01:14:28.381381    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:28.381381    7488 node_ready.go:49] node "multinode-611500" has status "Ready":"True"
	I0409 01:14:28.381381    7488 node_ready.go:38] duration metric: took 12.0060864s for node "multinode-611500" to be "Ready" ...
	I0409 01:14:28.381381    7488 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
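	[editor's note] The timestamps above show the node_ready wait issuing one GET roughly every 500ms until the NodeReady condition flips to True (12.006s in this run). A sketch of that polling pattern; this is illustrative, not minikube's actual node_ready.go, and assumes the clientset from the earlier sketch plus wait = k8s.io/apimachinery/pkg/util/wait and corev1 = k8s.io/api/core/v1:

		// waitNodeReady blocks until the named Node reports NodeReady=True,
		// polling every 500ms up to the given timeout.
		func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
			return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
				func(ctx context.Context) (bool, error) {
					node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
					if err != nil {
						return false, nil // transient API errors: keep polling
					}
					for _, c := range node.Status.Conditions {
						if c.Type == corev1.NodeReady {
							return c.Status == corev1.ConditionTrue, nil
						}
					}
					return false, nil
				})
		}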
	I0409 01:14:28.381919    7488 type.go:204] "Request Body" body=""
	I0409 01:14:28.381919    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods
	I0409 01:14:28.382032    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:28.382032    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:28.382068    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:28.386760    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:28.386760    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:28.386760    7488 round_trippers.go:587]     Audit-Id: 04f4efb4-d390-4732-9a48-eb11d9ca34dc
	I0409 01:14:28.386760    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:28.386760    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:28.386760    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:28.386760    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:28.386760    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:28 GMT
	I0409 01:14:28.388973    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 db ee 03 0a  0a 0a 00 12 04 31 39 35  |ist..........195|
		00000020  39 1a 00 12 86 29 0a 99  19 0a 18 63 6f 72 65 64  |9....).....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 64 35  |ns-668d6bf9bc-d5|
		00000040  34 73 34 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |4s4..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 31 32 34 33 31 66 32  |ystem".*$12431f2|
		00000070  37 2d 37 65 34 65 2d 34  31 63 39 2d 38 64 35 34  |7-7e4e-41c9-8d54|
		00000080  2d 62 63 37 62 65 32 30  37 34 62 39 63 32 04 31  |-bc7be2074b9c2.1|
		00000090  38 35 39 38 00 42 08 08  96 88 d7 bf 06 10 00 5a  |8598.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 311806 chars]
	 >
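	[editor's note] With the node Ready, the client lists every pod in kube-system once (the PodList response above) and then waits on each system-critical pod individually, starting with coredns just below. The equivalent list call with client-go, reusing the clientset from the earlier sketch, would be:

		// Enumerate the kube-system pods that the per-pod readiness waits iterate over.
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})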
	I0409 01:14:28.390554    7488 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:28.390708    7488 type.go:168] "Request Body" body=""
	I0409 01:14:28.390774    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 01:14:28.390774    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:28.390816    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:28.390816    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:28.393521    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:28.393521    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:28.393521    7488 round_trippers.go:587]     Audit-Id: 0f0e8574-9f44-43a3-a8db-5ac0372ec914
	I0409 01:14:28.393521    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:28.393521    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:28.393521    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:28.393521    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:28.393521    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:28 GMT
	I0409 01:14:28.393521    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  86 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 64 35 34 73 34 12  |68d6bf9bc-d54s4.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 32 34  33 31 66 32 37 2d 37 65  |m".*$12431f27-7e|
		00000060  34 65 2d 34 31 63 39 2d  38 64 35 34 2d 62 63 37  |4e-41c9-8d54-bc7|
		00000070  62 65 32 30 37 34 62 39  63 32 04 31 38 35 39 38  |be2074b9c2.18598|
		00000080  00 42 08 08 96 88 d7 bf  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25042 chars]
	 >
	I0409 01:14:28.394707    7488 type.go:168] "Request Body" body=""
	I0409 01:14:28.394707    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:28.394707    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:28.394860    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:28.394860    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:28.398572    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:28.398572    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:28.398572    7488 round_trippers.go:587]     Audit-Id: 289ef060-dbe6-4413-82fb-7eeedd979218
	I0409 01:14:28.398572    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:28.398572    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:28.398572    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:28.398572    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:28.398572    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:28 GMT
	I0409 01:14:28.398572    7488 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-identical to the post-Ready Node response body above; truncated 22277 chars)
	 >
	I0409 01:14:28.891199    7488 type.go:168] "Request Body" body=""
	I0409 01:14:28.891385    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 01:14:28.891385    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:28.891385    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:28.891385    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:28.898186    7488 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 01:14:28.898186    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:28.898186    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:28.898186    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:28 GMT
	I0409 01:14:28.898186    7488 round_trippers.go:587]     Audit-Id: 3836224a-7ba3-401a-9b4e-929a4538dc6e
	I0409 01:14:28.898186    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:28.898721    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:28.898721    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:28.898973    7488 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-identical to the coredns Pod response body above; truncated 25042 chars)
	 >
	I0409 01:14:28.898973    7488 type.go:168] "Request Body" body=""
	I0409 01:14:28.898973    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:28.898973    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:28.898973    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:28.899578    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:28.902952    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:28.903952    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:28.903952    7488 round_trippers.go:587]     Audit-Id: 1732fa1b-762c-4a10-a73f-97b148c81258
	I0409 01:14:28.903952    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:28.903952    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:28.903952    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:28.903952    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:28.903952    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:28 GMT
	I0409 01:14:28.903952    7488 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-identical to the post-Ready Node response body above; truncated 22277 chars)
	 >
	I0409 01:14:29.391284    7488 type.go:168] "Request Body" body=""
	I0409 01:14:29.392275    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 01:14:29.392275    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:29.392275    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:29.392418    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:29.398539    7488 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 01:14:29.398675    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:29.398675    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:29.398675    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:29 GMT
	I0409 01:14:29.398675    7488 round_trippers.go:587]     Audit-Id: 084a06a0-502c-46e1-9f87-a24fa3b27639
	I0409 01:14:29.398675    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:29.398675    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:29.398675    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:29.399267    7488 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-identical to the coredns Pod response body above; truncated 25042 chars)
	 >
	I0409 01:14:29.399509    7488 type.go:168] "Request Body" body=""
	I0409 01:14:29.399509    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:29.399509    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:29.399509    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:29.399509    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:29.401949    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:29.401949    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:29.401949    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:29 GMT
	I0409 01:14:29.401949    7488 round_trippers.go:587]     Audit-Id: 19ea9810-aae3-42f0-9d68-5b0e443b7199
	I0409 01:14:29.401949    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:29.401949    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:29.401949    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:29.401949    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:29.403045    7488 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-identical to the post-Ready Node response body above; truncated 22277 chars)
	 >
	I0409 01:14:29.892182    7488 type.go:168] "Request Body" body=""
	I0409 01:14:29.892182    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 01:14:29.892182    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:29.892182    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:29.892182    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:29.895964    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:29.895964    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:29.895964    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:29.895964    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:29.895964    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:29.895964    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:29 GMT
	I0409 01:14:29.895964    7488 round_trippers.go:587]     Audit-Id: 99f50467-2af0-4b65-b7f7-71303fb4b702
	I0409 01:14:29.895964    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:29.895964    7488 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-identical to the coredns Pod response body above; truncated 25042 chars)
	 >
	I0409 01:14:29.896845    7488 type.go:168] "Request Body" body=""
	I0409 01:14:29.896899    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:29.896899    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:29.896899    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:29.896899    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:29.899759    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:29.899863    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:29.899902    7488 round_trippers.go:587]     Audit-Id: df33a631-e5f4-4aa6-a163-a5213fcbfd56
	I0409 01:14:29.899902    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:29.899902    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:29.899902    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:29.899902    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:29.899902    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:29 GMT
	I0409 01:14:29.900213    7488 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-identical to the post-Ready Node response body above; truncated 22277 chars)
	 >
	I0409 01:14:30.391489    7488 type.go:168] "Request Body" body=""
	I0409 01:14:30.391489    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 01:14:30.391489    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:30.391489    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:30.391489    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:30.396311    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:30.396311    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:30.396448    7488 round_trippers.go:587]     Audit-Id: c34e0e79-0ff8-4803-9c66-0cfc740158d6
	I0409 01:14:30.396473    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:30.396473    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:30.396473    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:30.396473    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:30.396473    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:30 GMT
	I0409 01:14:30.396665    7488 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-identical to the coredns Pod response body above; truncated 25042 chars)
	 >
	I0409 01:14:30.396665    7488 type.go:168] "Request Body" body=""
	I0409 01:14:30.396665    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:30.396665    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:30.396665    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:30.396665    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:30.399865    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:30.399945    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:30.399945    7488 round_trippers.go:587]     Audit-Id: d5678d61-39a5-4a06-bdba-26f94c7b8ca0
	I0409 01:14:30.399945    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:30.399945    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:30.399945    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:30.399945    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:30.399945    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:30 GMT
	I0409 01:14:30.401119    7488 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-identical to the post-Ready Node response body above; truncated 22277 chars)
	 >
	I0409 01:14:30.401330    7488 pod_ready.go:103] pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace has status "Ready":"False"
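	[editor's note] Each iteration of this wait fetches the coredns Pod and its Node, and derives the "Ready" value from the PodReady entry in status.conditions; that check is what pod_ready.go:103 reports as "False" above. A minimal illustrative helper (not minikube's own code), assuming corev1 = k8s.io/api/core/v1:

		// isPodReady mirrors the check behind the "Ready":"False" log line above.
		func isPodReady(pod *corev1.Pod) bool {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue
				}
			}
			return false
		}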
	I0409 01:14:30.890758    7488 type.go:168] "Request Body" body=""
	I0409 01:14:30.890758    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 01:14:30.890758    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:30.890758    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:30.890758    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:30.894765    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:30.894765    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:30.894765    7488 round_trippers.go:587]     Audit-Id: ede2eb51-16e8-4ee3-9a9a-a9d13afa88ca
	I0409 01:14:30.894765    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:30.894765    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:30.894765    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:30.894765    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:30.894765    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:30 GMT
	I0409 01:14:30.895757    7488 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-identical to the coredns Pod response body above; truncated 25042 chars)
	 >
	I0409 01:14:30.895757    7488 type.go:168] "Request Body" body=""
	I0409 01:14:30.895757    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:30.895757    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:30.895757    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:30.895757    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:30.898777    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:30.898777    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:30.898777    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:30.898777    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:30 GMT
	I0409 01:14:30.898777    7488 round_trippers.go:587]     Audit-Id: 4176e11a-6b50-4bab-9a15-917e42a3ebd6
	I0409 01:14:30.898777    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:30.898777    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:30.898777    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:30.899757    7488 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-identical to the post-Ready Node response body above; truncated 22277 chars)
	 >
	I0409 01:14:31.391403    7488 type.go:168] "Request Body" body=""
	I0409 01:14:31.391403    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 01:14:31.391403    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:31.391403    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:31.391403    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:31.398287    7488 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 01:14:31.398356    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:31.398356    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:31.398434    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:31.398434    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:31 GMT
	I0409 01:14:31.398434    7488 round_trippers.go:587]     Audit-Id: e14b7521-93b9-4f36-ab46-bea874f56067
	I0409 01:14:31.398434    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:31.398434    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:31.398434    7488 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-identical to the coredns Pod response body above; truncated 25042 chars)
	 >
	I0409 01:14:31.399210    7488 type.go:168] "Request Body" body=""
	I0409 01:14:31.399327    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:31.399381    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:31.399381    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:31.399381    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:31.405016    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:31.405098    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:31.405098    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:31.405098    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:31 GMT
	I0409 01:14:31.405098    7488 round_trippers.go:587]     Audit-Id: 6f56fd36-d522-4540-8719-f9524d02f8cf
	I0409 01:14:31.405098    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:31.405175    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:31.405175    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:31.405510    7488 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-identical to the post-Ready Node response body above; truncated 22277 chars)
	 >
	I0409 01:14:31.891283    7488 type.go:168] "Request Body" body=""
	I0409 01:14:31.891283    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 01:14:31.891283    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:31.891283    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:31.891283    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:31.895851    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:31.895922    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:31.895922    7488 round_trippers.go:587]     Audit-Id: 2e613601-421b-42bd-b539-afe03d13c444
	I0409 01:14:31.895922    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:31.896013    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:31.896013    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:31.896013    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:31.896013    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:31 GMT
	I0409 01:14:31.896454    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  86 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 64 35 34 73 34 12  |68d6bf9bc-d54s4.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 32 34  33 31 66 32 37 2d 37 65  |m".*$12431f27-7e|
		00000060  34 65 2d 34 31 63 39 2d  38 64 35 34 2d 62 63 37  |4e-41c9-8d54-bc7|
		00000070  62 65 32 30 37 34 62 39  63 32 04 31 38 35 39 38  |be2074b9c2.18598|
		00000080  00 42 08 08 96 88 d7 bf  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25042 chars]
	 >
	I0409 01:14:31.896766    7488 type.go:168] "Request Body" body=""
	I0409 01:14:31.896902    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:31.896929    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:31.896929    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:31.896929    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:31.899757    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:31.899847    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:31.899847    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:31.899847    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:31.899847    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:31.899847    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:31.899847    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:31 GMT
	I0409 01:14:31.899847    7488 round_trippers.go:587]     Audit-Id: 4ef062de-f7dd-47b0-85eb-056075b84bcd
	I0409 01:14:31.900141    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:32.391371    7488 type.go:168] "Request Body" body=""
	I0409 01:14:32.391371    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 01:14:32.391371    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:32.391371    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:32.391371    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:32.396364    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:32.396364    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:32.396364    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:32.396364    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:32.396364    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:32.396364    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:32.396364    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:32 GMT
	I0409 01:14:32.396364    7488 round_trippers.go:587]     Audit-Id: c1517004-eff2-42db-b483-55c12e64abc7
	I0409 01:14:32.396364    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  c7 28 0a af 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.(.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 64 35 34 73 34 12  |68d6bf9bc-d54s4.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 32 34  33 31 66 32 37 2d 37 65  |m".*$12431f27-7e|
		00000060  34 65 2d 34 31 63 39 2d  38 64 35 34 2d 62 63 37  |4e-41c9-8d54-bc7|
		00000070  62 65 32 30 37 34 62 39  63 32 04 31 39 37 36 38  |be2074b9c2.19768|
		00000080  00 42 08 08 96 88 d7 bf  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 24727 chars]
	 >
	I0409 01:14:32.396364    7488 type.go:168] "Request Body" body=""
	I0409 01:14:32.396364    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:32.396364    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:32.396364    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:32.396364    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:32.402366    7488 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 01:14:32.402366    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:32.402877    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:32.402877    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:32 GMT
	I0409 01:14:32.402877    7488 round_trippers.go:587]     Audit-Id: 6d530187-f48a-482b-9c50-4006f3a3fdee
	I0409 01:14:32.402877    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:32.402877    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:32.402877    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:32.403439    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:32.403439    7488 pod_ready.go:93] pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace has status "Ready":"True"
	I0409 01:14:32.403439    7488 pod_ready.go:82] duration metric: took 4.0127507s for pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:32.403439    7488 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:32.403439    7488 type.go:168] "Request Body" body=""
	I0409 01:14:32.403439    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-611500
	I0409 01:14:32.403439    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:32.403439    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:32.403439    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:32.406834    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:32.406834    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:32.406834    7488 round_trippers.go:587]     Audit-Id: b8d7a01e-0602-444d-a1fc-258b1f888a39
	I0409 01:14:32.406834    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:32.406834    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:32.406834    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:32.406834    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:32.406834    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:32 GMT
	I0409 01:14:32.406834    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  8c 2c 0a a1 1a 0a 15 65  74 63 64 2d 6d 75 6c 74  |.,.....etcd-mult|
		00000020  69 6e 6f 64 65 2d 36 31  31 35 30 30 12 00 1a 0b  |inode-611500....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 65  |kube-system".*$e|
		00000040  36 62 33 39 62 31 61 2d  61 36 64 35 2d 34 36 64  |6b39b1a-a6d5-46d|
		00000050  31 2d 61 35 36 61 2d 32  34 33 63 39 62 62 36 66  |1-a56a-243c9bb6f|
		00000060  35 36 33 32 04 31 39 34  39 38 00 42 08 08 e6 93  |5632.19498.B....|
		00000070  d7 bf 06 10 00 5a 11 0a  09 63 6f 6d 70 6f 6e 65  |.....Z...compone|
		00000080  6e 74 12 04 65 74 63 64  5a 15 0a 04 74 69 65 72  |nt..etcdZ...tier|
		00000090  12 0d 63 6f 6e 74 72 6f  6c 2d 70 6c 61 6e 65 62  |..control-planeb|
		000000a0  50 0a 30 6b 75 62 65 61  64 6d 2e 6b 75 62 65 72  |P.0kubeadm.kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 65 74 63 64 2e 61 64  |netes.io/etcd.ad|
		000000c0  76 65 72 74 69 73 65 2d  63 6c 69 65 6e 74 2d 75  |vertise-client- [truncated 27007 chars]
	 >
	I0409 01:14:32.407514    7488 type.go:168] "Request Body" body=""
	I0409 01:14:32.407540    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:32.407597    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:32.407597    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:32.407597    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:32.409506    7488 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0409 01:14:32.409506    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:32.409506    7488 round_trippers.go:587]     Audit-Id: cc0831f9-8cdf-4941-8476-0aa607c5648b
	I0409 01:14:32.409506    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:32.409506    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:32.409506    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:32.409506    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:32.409506    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:32 GMT
	I0409 01:14:32.409506    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:32.410137    7488 pod_ready.go:93] pod "etcd-multinode-611500" in "kube-system" namespace has status "Ready":"True"
	I0409 01:14:32.410137    7488 pod_ready.go:82] duration metric: took 6.6976ms for pod "etcd-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:32.410233    7488 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:32.410335    7488 type.go:168] "Request Body" body=""
	I0409 01:14:32.410360    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-611500
	I0409 01:14:32.410443    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:32.410443    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:32.410443    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:32.413957    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:32.413957    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:32.413957    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:32.413957    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:32.413957    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:32.413957    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:32.413957    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:32 GMT
	I0409 01:14:32.413957    7488 round_trippers.go:587]     Audit-Id: ae104c88-34aa-4a7e-9b1b-c0ad61a1374d
	I0409 01:14:32.414624    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a8 36 0a b1 1c 0a 1f 6b  75 62 65 2d 61 70 69 73  |.6.....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  36 31 31 35 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |611500....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 66 39 39 32 34 37 35  |ystem".*$f992475|
		00000050  34 2d 66 38 63 35 2d 34  61 38 62 2d 39 64 61 32  |4-f8c5-4a8b-9da2|
		00000060  2d 32 33 64 38 30 39 36  61 35 65 63 66 32 04 31  |-23d8096a5ecf2.1|
		00000070  39 34 31 38 00 42 08 08  e6 93 d7 bf 06 10 00 5a  |9418.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 61 70 69 73 65  72 76 65 72 5a 15 0a 04  |be-apiserverZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 57 0a 3f 6b  75 62 65 61 64 6d 2e 6b  |anebW.?kubeadm.k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 6b 75 62  |ubernetes.io/ku [truncated 33418 chars]
	 >
	I0409 01:14:32.414808    7488 type.go:168] "Request Body" body=""
	I0409 01:14:32.414808    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:32.414808    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:32.414808    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:32.414808    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:32.417983    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:32.418034    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:32.418034    7488 round_trippers.go:587]     Audit-Id: 21c324f1-9bab-44d6-8694-3e92661e929f
	I0409 01:14:32.418077    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:32.418100    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:32.418125    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:32.418218    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:32.418395    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:32 GMT
	I0409 01:14:32.418511    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:32.418511    7488 pod_ready.go:93] pod "kube-apiserver-multinode-611500" in "kube-system" namespace has status "Ready":"True"
	I0409 01:14:32.418511    7488 pod_ready.go:82] duration metric: took 8.2419ms for pod "kube-apiserver-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:32.418511    7488 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:32.418511    7488 type.go:168] "Request Body" body=""
	I0409 01:14:32.419039    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:32.419102    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:32.419102    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:32.419102    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:32.422171    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:32.422171    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:32.422171    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:32.422171    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:32 GMT
	I0409 01:14:32.422171    7488 round_trippers.go:587]     Audit-Id: 670f263b-6be4-4125-b69b-34055ae2c84c
	I0409 01:14:32.422171    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:32.422171    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:32.422171    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:32.422171    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:32.422171    7488 type.go:168] "Request Body" body=""
	I0409 01:14:32.422171    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:32.422171    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:32.422171    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:32.422171    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:32.429096    7488 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 01:14:32.430116    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:32.430116    7488 round_trippers.go:587]     Audit-Id: 8c3f3a5c-2c59-4e30-837f-25dd36087c03
	I0409 01:14:32.430116    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:32.430116    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:32.430116    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:32.430116    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:32.430116    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:32 GMT
	I0409 01:14:32.430116    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:32.919082    7488 type.go:168] "Request Body" body=""
	I0409 01:14:32.919082    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:32.919082    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:32.919082    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:32.919082    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:32.923585    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:32.923667    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:32.923667    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:32.923667    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:32.923667    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:32 GMT
	I0409 01:14:32.923667    7488 round_trippers.go:587]     Audit-Id: 962775c5-3cdb-49e6-855f-4dcb9e551ab6
	I0409 01:14:32.923667    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:32.923667    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:32.924265    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:32.924672    7488 type.go:168] "Request Body" body=""
	I0409 01:14:32.924672    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:32.924672    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:32.924672    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:32.924672    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:32.927245    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:32.927245    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:32.927245    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:32 GMT
	I0409 01:14:32.927245    7488 round_trippers.go:587]     Audit-Id: 39646721-cffc-457e-ace4-1f5cca8e1b17
	I0409 01:14:32.927245    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:32.927245    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:32.927245    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:32.927245    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:32.932649    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:33.419188    7488 type.go:168] "Request Body" body=""
	I0409 01:14:33.419188    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:33.419188    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:33.419188    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:33.419188    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:33.423081    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:33.423152    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:33.423152    7488 round_trippers.go:587]     Audit-Id: 078159cb-c7f7-4634-a520-538ff89e63a2
	I0409 01:14:33.423152    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:33.423152    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:33.423152    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:33.423152    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:33.423152    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:33 GMT
	I0409 01:14:33.423557    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:33.423918    7488 type.go:168] "Request Body" body=""
	I0409 01:14:33.423971    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:33.423971    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:33.424029    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:33.424029    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:33.427380    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:33.427467    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:33.427542    7488 round_trippers.go:587]     Audit-Id: dae36050-4a9b-4e57-92a8-bcdf8f5a25d5
	I0409 01:14:33.427542    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:33.427542    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:33.427542    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:33.427542    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:33.427542    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:33 GMT
	I0409 01:14:33.427599    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:33.919535    7488 type.go:168] "Request Body" body=""
	I0409 01:14:33.919535    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:33.920149    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:33.920149    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:33.920149    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:33.924949    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:33.925050    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:33.925050    7488 round_trippers.go:587]     Audit-Id: 3403a0fe-7d52-41b3-9498-1a8206ef33b2
	I0409 01:14:33.925050    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:33.925050    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:33.925050    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:33.925050    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:33.925108    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:33 GMT
	I0409 01:14:33.925520    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:33.925724    7488 type.go:168] "Request Body" body=""
	I0409 01:14:33.925724    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:33.925724    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:33.925724    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:33.925724    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:33.929401    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:33.929492    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:33.929492    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:33.929492    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:33.929492    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:33.929492    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:33.929492    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:33 GMT
	I0409 01:14:33.929492    7488 round_trippers.go:587]     Audit-Id: b07a4e6b-2fc3-4689-905e-8b2b706d4788
	I0409 01:14:33.929642    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:34.419230    7488 type.go:168] "Request Body" body=""
	I0409 01:14:34.419230    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:34.419230    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:34.419230    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:34.419230    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:34.423424    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:34.423499    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:34.423499    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:34.423499    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:34 GMT
	I0409 01:14:34.423499    7488 round_trippers.go:587]     Audit-Id: 97d7413a-630e-489e-aff4-701df6dfbf3b
	I0409 01:14:34.423499    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:34.423499    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:34.423499    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:34.423880    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:34.424031    7488 type.go:168] "Request Body" body=""
	I0409 01:14:34.424031    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:34.424031    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:34.424031    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:34.424031    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:34.429424    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:34.429424    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:34.429424    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:34.429424    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:34.429424    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:34 GMT
	I0409 01:14:34.429424    7488 round_trippers.go:587]     Audit-Id: a9296c7b-0384-4e00-8304-c02fc9a82168
	I0409 01:14:34.429424    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:34.429424    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:34.430179    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:34.430379    7488 pod_ready.go:103] pod "kube-controller-manager-multinode-611500" in "kube-system" namespace has status "Ready":"False"
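The pod_ready.go checkpoints in this trace follow a simple poll: GET the pod, inspect its Ready condition, wait roughly 500ms (the cycles above tick at ~.419/.919 each second), and repeat until the condition is True or the logged 6m0s budget expires. A hedged client-go sketch of that loop (an illustration under those assumptions, not minikube's actual pod_ready.go):

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady polls the named pod until its Ready condition reports
// True, mirroring the repeated GET /pods/<name> requests in the log.
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as "not ready yet" and keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

A "Ready":"False" result, as logged just above for kube-controller-manager-multinode-611500, simply sends the loop around again; the companion GET /nodes/multinode-611500 in each cycle is the node-readiness side of the same check.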
	I0409 01:14:34.919217    7488 type.go:168] "Request Body" body=""
	I0409 01:14:34.919217    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:34.919217    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:34.919217    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:34.919217    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:34.924490    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:34.924490    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:34.924490    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:34.924490    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:34.924490    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:34 GMT
	I0409 01:14:34.924490    7488 round_trippers.go:587]     Audit-Id: bf703a23-0194-4295-b5a7-84ba94a961c1
	I0409 01:14:34.924490    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:34.924490    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:34.925343    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:34.925593    7488 type.go:168] "Request Body" body=""
	I0409 01:14:34.925667    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:34.925728    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:34.925748    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:34.925748    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:34.928112    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:34.928112    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:34.928500    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:34.928500    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:34.928500    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:34.928500    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:34 GMT
	I0409 01:14:34.928500    7488 round_trippers.go:587]     Audit-Id: 76046b8b-fb86-4cd7-a662-46eb992451b7
	I0409 01:14:34.928500    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:34.928709    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:35.418954    7488 type.go:168] "Request Body" body=""
	I0409 01:14:35.418954    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:35.418954    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:35.418954    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:35.418954    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:35.423657    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:35.423766    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:35.423766    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:35.423766    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:35.423766    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:35.423766    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:35 GMT
	I0409 01:14:35.423766    7488 round_trippers.go:587]     Audit-Id: 2cc48a53-e9de-4741-a06b-105b349fb29f
	I0409 01:14:35.423766    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:35.424736    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:35.424974    7488 type.go:168] "Request Body" body=""
	I0409 01:14:35.424974    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:35.424974    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:35.424974    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:35.424974    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:35.427798    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:35.427798    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:35.428248    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:35.428248    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:35.428248    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:35.428248    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:35 GMT
	I0409 01:14:35.428248    7488 round_trippers.go:587]     Audit-Id: 8943f520-268a-4de8-8464-c0aa1162a31b
	I0409 01:14:35.428248    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:35.428550    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:35.919091    7488 type.go:168] "Request Body" body=""
	I0409 01:14:35.919091    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:35.919091    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:35.919091    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:35.919091    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:35.923777    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:35.923777    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:35.923861    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:35.923861    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:35.923861    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:35.923861    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:35 GMT
	I0409 01:14:35.923861    7488 round_trippers.go:587]     Audit-Id: 2c9eb9c9-f247-400d-8b03-e051159a48cb
	I0409 01:14:35.923861    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:35.924510    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:35.924865    7488 type.go:168] "Request Body" body=""
	I0409 01:14:35.924865    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:35.924946    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:35.924946    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:35.924946    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:35.928711    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:35.928841    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:35.928910    7488 round_trippers.go:587]     Audit-Id: 61aa530f-353d-4a4e-ad35-5a6a18611261
	I0409 01:14:35.928910    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:35.928910    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:35.928910    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:35.928970    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:35.928970    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:35 GMT
	I0409 01:14:35.929216    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:36.419859    7488 type.go:168] "Request Body" body=""
	I0409 01:14:36.419963    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:36.419963    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:36.419963    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:36.420113    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:36.424738    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:36.424738    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:36.424794    7488 round_trippers.go:587]     Audit-Id: 11cf8e1d-ddf2-48b2-94ca-7e94145c59c3
	I0409 01:14:36.424794    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:36.424794    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:36.424794    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:36.424794    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:36.424794    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:36 GMT
	I0409 01:14:36.425266    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:36.425511    7488 type.go:168] "Request Body" body=""
	I0409 01:14:36.425511    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:36.425511    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:36.425511    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:36.425511    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:36.428660    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:36.428660    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:36.428660    7488 round_trippers.go:587]     Audit-Id: 7d6fd480-cc38-4d82-8966-42522a375db8
	I0409 01:14:36.428756    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:36.428756    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:36.428756    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:36.428756    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:36.428756    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:36 GMT
	I0409 01:14:36.429144    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:36.923871    7488 type.go:168] "Request Body" body=""
	I0409 01:14:36.924173    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:36.924173    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:36.924230    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:36.924230    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:36.926122    7488 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0409 01:14:36.926122    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:36.926122    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:36 GMT
	I0409 01:14:36.926122    7488 round_trippers.go:587]     Audit-Id: a3a666c8-467f-4819-9810-b18125e83ac7
	I0409 01:14:36.926122    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:36.926122    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:36.926122    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:36.926122    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:36.926122    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:36.926122    7488 type.go:168] "Request Body" body=""
	I0409 01:14:36.926122    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:36.926122    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:36.926122    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:36.926122    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:36.934759    7488 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0409 01:14:36.934759    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:36.934759    7488 round_trippers.go:587]     Audit-Id: 04627fa6-cd1f-4487-85ed-557cd328d104
	I0409 01:14:36.934759    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:36.934759    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:36.934759    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:36.934759    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:36.934759    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:36 GMT
	I0409 01:14:36.934759    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:36.935314    7488 pod_ready.go:103] pod "kube-controller-manager-multinode-611500" in "kube-system" namespace has status "Ready":"False"
	I0409 01:14:37.419170    7488 type.go:168] "Request Body" body=""
	I0409 01:14:37.419170    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:37.419170    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:37.419170    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:37.419170    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:37.427982    7488 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0409 01:14:37.427982    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:37.427982    7488 round_trippers.go:587]     Audit-Id: 1ab6f036-148a-4662-bdd9-f0f87b3098b1
	I0409 01:14:37.427982    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:37.427982    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:37.427982    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:37.427982    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:37.427982    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:37 GMT
	I0409 01:14:37.429265    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:37.429433    7488 type.go:168] "Request Body" body=""
	I0409 01:14:37.429433    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:37.429433    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:37.429433    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:37.429433    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:37.432765    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:37.432765    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:37.432765    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:37 GMT
	I0409 01:14:37.432765    7488 round_trippers.go:587]     Audit-Id: 55e55312-ff41-4169-8158-e8b8ee91c920
	I0409 01:14:37.432765    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:37.432765    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:37.432765    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:37.432765    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:37.433469    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:37.919140    7488 type.go:168] "Request Body" body=""
	I0409 01:14:37.919140    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:37.919140    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:37.919140    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:37.919140    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:37.924094    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:37.924094    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:37.924196    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:37.924196    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:37.924196    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:37 GMT
	I0409 01:14:37.924233    7488 round_trippers.go:587]     Audit-Id: ffa81669-e661-4597-b247-7efb80ea595f
	I0409 01:14:37.924233    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:37.924233    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:37.924648    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:37.925009    7488 type.go:168] "Request Body" body=""
	I0409 01:14:37.925082    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:37.925168    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:37.925168    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:37.925198    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:37.928890    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:37.928890    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:37.929113    7488 round_trippers.go:587]     Audit-Id: 6e299306-b7b0-47c2-af66-30b46c1a40fb
	I0409 01:14:37.929113    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:37.929113    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:37.929113    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:37.929113    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:37.929113    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:37 GMT
	I0409 01:14:37.929113    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:38.420621    7488 type.go:168] "Request Body" body=""
	I0409 01:14:38.420732    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:38.420802    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:38.420802    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:38.420802    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:38.428413    7488 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0409 01:14:38.428507    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:38.428558    7488 round_trippers.go:587]     Audit-Id: a8271f83-ec34-4c62-9c6c-ef95332d1aa0
	I0409 01:14:38.428558    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:38.428558    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:38.428558    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:38.428558    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:38.428558    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:38 GMT
	I0409 01:14:38.429198    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:38.429621    7488 type.go:168] "Request Body" body=""
	I0409 01:14:38.429693    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:38.429693    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:38.429693    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:38.429693    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:38.432274    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:38.432274    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:38.432274    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:38.432274    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:38 GMT
	I0409 01:14:38.432274    7488 round_trippers.go:587]     Audit-Id: a843f51e-1c95-4132-9b82-f76fe5c28727
	I0409 01:14:38.432274    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:38.432274    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:38.432274    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:38.432274    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:38.919551    7488 type.go:168] "Request Body" body=""
	I0409 01:14:38.919551    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:38.919551    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:38.919551    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:38.919551    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:38.925823    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:38.925823    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:38.925823    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:38.925823    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:38.925823    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:38 GMT
	I0409 01:14:38.925823    7488 round_trippers.go:587]     Audit-Id: 20a82344-dd93-4cfd-a74e-2935de0c6c74
	I0409 01:14:38.925823    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:38.925823    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:38.926509    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:38.926759    7488 type.go:168] "Request Body" body=""
	I0409 01:14:38.926839    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:38.926839    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:38.926839    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:38.926839    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:38.929525    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:38.929525    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:38.929525    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:38.929525    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:38.929525    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:38.929525    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:38 GMT
	I0409 01:14:38.929525    7488 round_trippers.go:587]     Audit-Id: 4059015e-45e4-44d2-932d-413b50e923d3
	I0409 01:14:38.929525    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:38.929525    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:39.419121    7488 type.go:168] "Request Body" body=""
	I0409 01:14:39.419121    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:39.419121    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:39.419121    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:39.419121    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:39.423755    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:39.423817    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:39.423817    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:39.423817    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:39.423817    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:39.423817    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:39 GMT
	I0409 01:14:39.423817    7488 round_trippers.go:587]     Audit-Id: 05b993c6-cc1b-4241-bd28-58c3be21f462
	I0409 01:14:39.423817    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:39.423817    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:39.424479    7488 type.go:168] "Request Body" body=""
	I0409 01:14:39.424571    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:39.424636    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:39.424743    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:39.424765    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:39.427419    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:39.427602    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:39.427602    7488 round_trippers.go:587]     Audit-Id: f20a4c2a-8e38-4743-9008-99e62a051fc1
	I0409 01:14:39.427602    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:39.427602    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:39.427602    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:39.427602    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:39.427602    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:39 GMT
	I0409 01:14:39.427789    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:39.427789    7488 pod_ready.go:103] pod "kube-controller-manager-multinode-611500" in "kube-system" namespace has status "Ready":"False"
	I0409 01:14:39.919417    7488 type.go:168] "Request Body" body=""
	I0409 01:14:39.919939    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:39.919939    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:39.919939    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:39.919939    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:39.924015    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:39.924096    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:39.924096    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:39.924096    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:39 GMT
	I0409 01:14:39.924096    7488 round_trippers.go:587]     Audit-Id: b230bb0d-3b66-4857-be8e-de25734a32aa
	I0409 01:14:39.924096    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:39.924096    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:39.924096    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:39.924558    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:39.924932    7488 type.go:168] "Request Body" body=""
	I0409 01:14:39.924991    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:39.924991    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:39.924991    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:39.924991    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:39.927740    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:39.927740    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:39.928438    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:39.928438    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:39 GMT
	I0409 01:14:39.928438    7488 round_trippers.go:587]     Audit-Id: 05d24e90-6fe9-479b-bb02-fa3a7d6a6092
	I0409 01:14:39.928438    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:39.928438    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:39.928438    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:39.928761    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:40.419147    7488 type.go:168] "Request Body" body=""
	I0409 01:14:40.419147    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:40.419147    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:40.419147    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:40.419147    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:40.423552    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:40.423552    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:40.423552    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:40.423659    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:40.423659    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:40.423659    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:40.423659    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:40 GMT
	I0409 01:14:40.423731    7488 round_trippers.go:587]     Audit-Id: d6f08b33-6e06-4b8f-ae44-18511b9a99fb
	I0409 01:14:40.423987    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:40.424230    7488 type.go:168] "Request Body" body=""
	I0409 01:14:40.424338    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:40.424338    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:40.424338    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:40.424338    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:40.426719    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:40.426832    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:40.426832    7488 round_trippers.go:587]     Audit-Id: 682e451c-41fa-4fd1-9e85-28baf5d12014
	I0409 01:14:40.426832    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:40.426832    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:40.426832    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:40.426832    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:40.426832    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:40 GMT
	I0409 01:14:40.427234    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:40.918774    7488 type.go:168] "Request Body" body=""
	I0409 01:14:40.918774    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:40.918774    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:40.918774    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:40.918774    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:40.923763    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:40.923763    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:40.923763    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:40.923763    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:40 GMT
	I0409 01:14:40.923763    7488 round_trippers.go:587]     Audit-Id: 763ce7d5-45cb-4e11-8bd3-fdbb0518d83d
	I0409 01:14:40.923763    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:40.923763    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:40.923763    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:40.924418    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:40.924784    7488 type.go:168] "Request Body" body=""
	I0409 01:14:40.924947    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:40.924947    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:40.924947    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:40.924947    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:40.927293    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:40.927293    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:40.927293    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:40.927293    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:40.927293    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:40.927293    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:40 GMT
	I0409 01:14:40.927293    7488 round_trippers.go:587]     Audit-Id: b5bfa7a4-77dc-4bcd-819a-3bf36e285e05
	I0409 01:14:40.927293    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:40.928686    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:41.418845    7488 type.go:168] "Request Body" body=""
	I0409 01:14:41.418845    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:41.418845    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:41.418845    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:41.418845    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:41.426056    7488 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0409 01:14:41.426056    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:41.426056    7488 round_trippers.go:587]     Audit-Id: 117217a5-8245-4e5b-a122-9cc38dc6aca8
	I0409 01:14:41.426056    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:41.426056    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:41.426056    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:41.426056    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:41.426056    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:41 GMT
	I0409 01:14:41.426778    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:41.426778    7488 type.go:168] "Request Body" body=""
	I0409 01:14:41.426778    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:41.426778    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:41.426778    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:41.426778    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:41.431090    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:41.431748    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:41.431748    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:41.431748    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:41.431748    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:41.431748    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:41.431748    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:41 GMT
	I0409 01:14:41.431824    7488 round_trippers.go:587]     Audit-Id: 71cb979d-a7eb-4ec1-a3ad-e6e5139d4d50
	I0409 01:14:41.432180    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:41.432297    7488 pod_ready.go:103] pod "kube-controller-manager-multinode-611500" in "kube-system" namespace has status "Ready":"False"
	I0409 01:14:41.919077    7488 type.go:168] "Request Body" body=""
	I0409 01:14:41.919077    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:41.919077    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:41.919077    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:41.919077    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:41.923056    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:41.924048    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:41.924048    7488 round_trippers.go:587]     Audit-Id: 4925a799-451f-4d81-bd04-abd243886971
	I0409 01:14:41.924048    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:41.924048    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:41.924048    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:41.924048    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:41.924048    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:41 GMT
	I0409 01:14:41.924048    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:41.924048    7488 type.go:168] "Request Body" body=""
	I0409 01:14:41.924048    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:41.924048    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:41.924048    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:41.924048    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:41.928532    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:41.928646    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:41.928646    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:41.928646    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:41.928697    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:41.928697    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:41.928697    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:41 GMT
	I0409 01:14:41.928697    7488 round_trippers.go:587]     Audit-Id: de86a095-8831-4dd6-aa3f-817a4c9b8247
	I0409 01:14:41.929028    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
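The hex-dumped bodies all begin with the bytes 6b 38 73 00 ("k8s\0"), the magic prefix of the Kubernetes protobuf envelope; the client opted into this wire format through the Accept header shown on every request. With client-go that is a rest.Config setting, sketched below (the field values match the headers in this log; the surrounding setup is assumed).

	package wire

	import "k8s.io/client-go/rest"

	// useProtobuf switches a rest.Config to the protobuf wire format, which
	// produces the "application/vnd.kubernetes.protobuf" responses (and the
	// k8s\0-prefixed bodies) seen in this log. Sketch only.
	func useProtobuf(cfg *rest.Config) {
		cfg.ContentType = "application/vnd.kubernetes.protobuf"
		cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	}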
	I0409 01:14:42.419054    7488 type.go:168] "Request Body" body=""
	I0409 01:14:42.419439    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:42.419514    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:42.419543    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:42.419543    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:42.426083    7488 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 01:14:42.426083    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:42.426083    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:42.426083    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:42.426083    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:42.426083    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:42 GMT
	I0409 01:14:42.426260    7488 round_trippers.go:587]     Audit-Id: 8ca4ba7b-a70e-4339-95d5-5539ea0d5a84
	I0409 01:14:42.426260    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:42.426308    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:42.426967    7488 type.go:168] "Request Body" body=""
	I0409 01:14:42.426999    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:42.426999    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:42.426999    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:42.426999    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:42.430485    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:42.430485    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:42.430485    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:42.430485    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:42.430485    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:42.430485    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:42 GMT
	I0409 01:14:42.430485    7488 round_trippers.go:587]     Audit-Id: c0cf7400-ec2c-4055-84d6-80669e212fc4
	I0409 01:14:42.430485    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:42.430613    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:42.918760    7488 type.go:168] "Request Body" body=""
	I0409 01:14:42.918760    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:42.918760    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:42.918760    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:42.918760    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:42.923522    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:42.923686    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:42.923686    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:42.923686    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:42 GMT
	I0409 01:14:42.923686    7488 round_trippers.go:587]     Audit-Id: cf2797fd-717d-47f8-928e-cb2575b81215
	I0409 01:14:42.923686    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:42.923686    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:42.923686    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:42.923686    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:42.924504    7488 type.go:168] "Request Body" body=""
	I0409 01:14:42.924575    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:42.924575    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:42.924575    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:42.924575    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:42.927516    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:42.927516    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:42.927516    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:42 GMT
	I0409 01:14:42.927516    7488 round_trippers.go:587]     Audit-Id: 43bcf67d-ce47-4ef6-96e5-d170d461c1c6
	I0409 01:14:42.927516    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:42.927650    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:42.927650    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:42.927650    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:42.927906    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:43.419674    7488 type.go:168] "Request Body" body=""
	I0409 01:14:43.419674    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:43.419674    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:43.419674    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:43.419674    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:43.423728    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:43.423728    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:43.423728    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:43 GMT
	I0409 01:14:43.423728    7488 round_trippers.go:587]     Audit-Id: 28523091-764c-4bf4-b928-ea8e1b9fce75
	I0409 01:14:43.423728    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:43.423728    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:43.423728    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:43.423728    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:43.423728    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:43.425124    7488 type.go:168] "Request Body" body=""
	I0409 01:14:43.425300    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:43.425300    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:43.425383    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:43.425383    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:43.429140    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:43.429205    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:43.429205    7488 round_trippers.go:587]     Audit-Id: 8a3f782f-78ed-4cc8-a656-888df7d51dce
	I0409 01:14:43.429281    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:43.429281    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:43.429281    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:43.429281    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:43.429281    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:43 GMT
	I0409 01:14:43.429512    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:43.919914    7488 type.go:168] "Request Body" body=""
	I0409 01:14:43.920065    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:43.920065    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:43.920065    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:43.920065    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:43.924207    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:43.924207    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:43.924207    7488 round_trippers.go:587]     Audit-Id: 83dd3892-d127-4069-82ed-12175a79050a
	I0409 01:14:43.924207    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:43.924326    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:43.924326    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:43.924326    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:43.924326    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:43 GMT
	I0409 01:14:43.925149    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:43.925334    7488 type.go:168] "Request Body" body=""
	I0409 01:14:43.925334    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:43.925334    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:43.925334    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:43.925334    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:43.927162    7488 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0409 01:14:43.928091    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:43.928091    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:43.928091    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:43.928091    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:43.928183    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:43 GMT
	I0409 01:14:43.928183    7488 round_trippers.go:587]     Audit-Id: 3f9742bd-9e52-4b07-99c3-adc043a7287b
	I0409 01:14:43.928183    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:43.928521    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:43.928619    7488 pod_ready.go:103] pod "kube-controller-manager-multinode-611500" in "kube-system" namespace has status "Ready":"False"
	I0409 01:14:44.419618    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.419618    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:44.419618    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.419618    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.419618    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.423962    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:44.423962    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.423962    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.423962    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.423962    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.423962    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.423962    7488 round_trippers.go:587]     Audit-Id: 18fdad6b-3537-432a-9778-69ab0fcc589e
	I0409 01:14:44.423962    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.425859    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:44.426000    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.426000    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:44.426000    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.426000    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.426000    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.429847    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:44.430535    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.430535    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.430535    7488 round_trippers.go:587]     Audit-Id: 68e4aa3e-6d60-448c-b71b-d96ee7e78ee6
	I0409 01:14:44.430535    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.430535    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.430535    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.430535    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.431135    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:44.918844    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.918844    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:44.918844    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.918844    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.918844    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.924674    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:44.924780    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.924780    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.924780    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.924780    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.924780    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.924780    7488 round_trippers.go:587]     Audit-Id: a2fb0a9d-377f-48d2-b721-a9fe6e80d937
	I0409 01:14:44.924840    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.925001    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  e4 31 0a 9c 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.1....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 38 38 38 00 42 08  |ec96062.19888.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 30570 chars]
	 >
	I0409 01:14:44.925823    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.925823    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:44.925935    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.925935    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.925935    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.929291    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:44.929491    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.929491    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.929491    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.929491    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.929491    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.929491    7488 round_trippers.go:587]     Audit-Id: c5ceb222-6933-465c-a2c6-2433e4349138
	I0409 01:14:44.929491    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.929491    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:44.929491    7488 pod_ready.go:93] pod "kube-controller-manager-multinode-611500" in "kube-system" namespace has status "Ready":"True"
	I0409 01:14:44.929491    7488 pod_ready.go:82] duration metric: took 12.5108216s for pod "kube-controller-manager-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:44.929491    7488 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bhjnx" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:44.930024    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.930167    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bhjnx
	I0409 01:14:44.930167    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.930167    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.930167    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.935246    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:44.935246    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.935246    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.935246    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.935246    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.935246    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.935246    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.935246    7488 round_trippers.go:587]     Audit-Id: 37899556-4b16-40bf-9c08-a0d91019d95f
	I0409 01:14:44.935975    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  af 25 0a c1 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 62 68 6a 6e 78 12  0b 6b 75 62 65 2d 70 72  |y-bhjnx..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 61 66 62  36 64 61 39 39 2d 64 65  |m".*$afb6da99-de|
		00000050  39 39 2d 34 39 63 34 2d  62 30 38 30 2d 38 35 30  |99-49c4-b080-850|
		00000060  30 62 34 62 30 38 64 39  62 32 03 36 32 35 38 00  |0b4b08d9b2.6258.|
		00000070  42 08 08 d1 89 d7 bf 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  37 62 62 38 34 63 34 39  |n-hash..7bb84c49|
		000000a0  38 34 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |84Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22744 chars]
	 >
	I0409 01:14:44.935975    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.935975    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500-m02
	I0409 01:14:44.935975    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.935975    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.935975    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.938369    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:44.938369    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.938369    7488 round_trippers.go:587]     Audit-Id: f2d09414-548b-4455-8dc7-5f0939635475
	I0409 01:14:44.938369    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.938369    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.938369    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.938369    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.938369    7488 round_trippers.go:587]     Content-Length: 3466
	I0409 01:14:44.938369    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.939427    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f3 1a 0a b0 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 04 31 37 37 34 38 00  |bd39faf32.17748.|
		00000060  42 08 08 d1 89 d7 bf 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16113 chars]
	 >
	I0409 01:14:44.939427    7488 pod_ready.go:93] pod "kube-proxy-bhjnx" in "kube-system" namespace has status "Ready":"True"
	I0409 01:14:44.939427    7488 pod_ready.go:82] duration metric: took 9.9362ms for pod "kube-proxy-bhjnx" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:44.939427    7488 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xnh8p" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:44.939427    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.939427    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnh8p
	I0409 01:14:44.939427    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.939427    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.939427    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.942108    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:44.942108    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.942108    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.942108    7488 round_trippers.go:587]     Audit-Id: 76b32b12-8b9f-4747-8587-6309e900ebd7
	I0409 01:14:44.942108    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.942108    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.942108    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.942108    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.943265    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b4 26 0a c5 15 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 78 6e 68 38 70 12  0b 6b 75 62 65 2d 70 72  |y-xnh8p..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 65 64 38  65 39 34 34 65 2d 65 37  |m".*$ed8e944e-e7|
		00000050  33 64 2d 34 34 34 63 2d  62 31 65 65 2d 64 37 31  |3d-444c-b1ee-d71|
		00000060  35 35 63 37 37 31 63 39  36 32 04 31 38 31 31 38  |55c771c962.18118|
		00000070  00 42 08 08 f5 8b d7 bf  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 37 62 62 38 34 63 34  |on-hash..7bb84c4|
		000000a0  39 38 34 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |984Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23381 chars]
	 >
	I0409 01:14:44.943327    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.943327    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500-m03
	I0409 01:14:44.943327    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.943327    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.943327    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.947005    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:44.947005    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.947469    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.947469    7488 round_trippers.go:587]     Content-Length: 3885
	I0409 01:14:44.947469    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.947469    7488 round_trippers.go:587]     Audit-Id: 786a32a3-bbd7-4372-920e-aa866bf04237
	I0409 01:14:44.947469    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.947510    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.947510    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.947807    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 96 1e 0a eb 12 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 33 12 00 1a 00  |e-611500-m03....|
		00000030  22 00 2a 24 38 63 66 33  37 34 64 36 2d 31 66 62  |".*$8cf374d6-1fb|
		00000040  30 2d 34 30 36 38 2d 39  62 66 39 2d 30 62 32 37  |0-4068-9bf9-0b27|
		00000050  61 34 32 61 63 66 34 39  32 04 31 39 38 33 38 00  |a42acf492.19838.|
		00000060  42 08 08 a0 91 d7 bf 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 18170 chars]
	 >
	I0409 01:14:44.947807    7488 pod_ready.go:98] node "multinode-611500-m03" hosting pod "kube-proxy-xnh8p" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500-m03" has status "Ready":"Unknown"
	I0409 01:14:44.947807    7488 pod_ready.go:82] duration metric: took 8.3796ms for pod "kube-proxy-xnh8p" in "kube-system" namespace to be "Ready" ...
	E0409 01:14:44.947807    7488 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-611500-m03" hosting pod "kube-proxy-xnh8p" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500-m03" has status "Ready":"Unknown"
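The skip above is a node gate: before a pod's own Ready condition is trusted, the waiter checks the hosting node, and a node whose Ready condition is Unknown (here multinode-611500-m03) disqualifies the pod from the wait. A sketch of such a gate, under the same assumptions as the earlier snippet:

	package readiness

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeReady reports whether the named node's Ready condition is True;
	// False or Unknown (as for multinode-611500-m03 above) both fail the gate.
	func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}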
	I0409 01:14:44.947807    7488 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zxxgf" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:44.947807    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.947807    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxxgf
	I0409 01:14:44.947807    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.947807    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.947807    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.951574    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:44.951574    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.951574    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.951574    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.951574    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.951574    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.952036    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.952036    7488 round_trippers.go:587]     Audit-Id: ad40fb34-ba59-45b5-8d42-a11a9eb73753
	I0409 01:14:44.953016    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  96 26 0a c2 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 7a 78 78 67 66 12  0b 6b 75 62 65 2d 70 72  |y-zxxgf..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 33 35 30  36 65 65 65 37 2d 64 39  |m".*$3506eee7-d9|
		00000050  34 36 2d 34 64 64 65 2d  39 31 63 39 2d 39 66 63  |46-4dde-91c9-9fc|
		00000060  35 63 31 34 37 34 34 33  34 32 04 31 39 33 32 38  |5c14744342.19328|
		00000070  00 42 08 08 96 88 d7 bf  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 37 62 62 38 34 63 34  |on-hash..7bb84c4|
		000000a0  39 38 34 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |984Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23225 chars]
	 >
	I0409 01:14:44.953285    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.953358    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:44.953358    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.953408    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.953408    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.956009    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:44.956058    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.956058    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.956132    7488 round_trippers.go:587]     Audit-Id: 5aea1c8c-17b9-4705-b4b1-4fee2f869a28
	I0409 01:14:44.956132    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.956132    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.956132    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.956132    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.956958    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:44.957009    7488 pod_ready.go:93] pod "kube-proxy-zxxgf" in "kube-system" namespace has status "Ready":"True"
	I0409 01:14:44.957009    7488 pod_ready.go:82] duration metric: took 9.2013ms for pod "kube-proxy-zxxgf" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:44.957009    7488 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:44.957009    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.957009    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-611500
	I0409 01:14:44.957009    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.957009    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.957009    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.962375    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:44.962375    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.962375    7488 round_trippers.go:587]     Audit-Id: a774ce8c-3a2d-4734-bea5-14e99139eec1
	I0409 01:14:44.962375    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.962375    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.962375    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.962375    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.962375    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.963044    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  ef 23 0a 84 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.#.....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  36 31 31 35 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |611500....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 39 31 38 35 64 35 63  |ystem".*$9185d5c|
		00000050  30 2d 62 32 38 61 2d 34  33 38 63 2d 62 30 35 61  |0-b28a-438c-b05a|
		00000060  2d 36 34 36 36 37 65 34  61 63 33 64 37 32 04 31  |-64667e4ac3d72.1|
		00000070  38 35 33 38 00 42 08 08  90 88 d7 bf 06 10 00 5a  |8538.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 73 63 68 65 64  75 6c 65 72 5a 15 0a 04  |be-schedulerZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 3d 0a 19 6b  75 62 65 72 6e 65 74 65  |aneb=..kubernete|
		000000c0  73 2e 69 6f 2f 63 6f 6e  66 69 67 2e 68 61 73 68  |s.io/config.has [truncated 21796 chars]
	 >
	I0409 01:14:44.963327    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.963327    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:44.963327    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.963327    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.963327    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.968296    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:44.968354    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.968354    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.968354    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.968465    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.968465    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.968465    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.968500    7488 round_trippers.go:587]     Audit-Id: 65042669-7c4a-4699-b4b3-25285f535fe2
	I0409 01:14:44.968773    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:44.968773    7488 pod_ready.go:93] pod "kube-scheduler-multinode-611500" in "kube-system" namespace has status "Ready":"True"
	I0409 01:14:44.968773    7488 pod_ready.go:82] duration metric: took 11.7646ms for pod "kube-scheduler-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:44.968773    7488 pod_ready.go:39] duration metric: took 16.5871816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0409 01:14:44.968773    7488 api_server.go:52] waiting for apiserver process to appear ...
	I0409 01:14:44.981879    7488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 01:14:45.013834    7488 command_runner.go:130] > 2024
	I0409 01:14:45.013970    7488 api_server.go:72] duration metric: took 28.999296s to wait for apiserver process to appear ...
	I0409 01:14:45.013970    7488 api_server.go:88] waiting for apiserver healthz status ...
	I0409 01:14:45.014026    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:14:45.021850    7488 api_server.go:279] https://192.168.120.172:8443/healthz returned 200:
	ok
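
The two checks above are distinct: minikube first confirms a kube-apiserver process exists on the node (the pgrep returning PID 2024), then polls the apiserver's /healthz endpoint over HTTPS until it answers 200 with body "ok". Below is a minimal Go sketch of that second probe, not minikube's actual implementation; the hard-coded URL and the InsecureSkipVerify transport are assumptions so the example is self-contained, whereas the real client verifies the certificate against the cluster CA.

    // Hedged sketch of a /healthz probe like the one logged above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption: skip CA verification so the sketch runs
                // anywhere; real code should trust the cluster CA instead.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return err
        }
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
        }
        return nil
    }

    func main() {
        fmt.Println(checkHealthz("https://192.168.120.172:8443/healthz"))
    }
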
	I0409 01:14:45.021850    7488 discovery_client.go:658] "Request Body" body=""
	I0409 01:14:45.021850    7488 round_trippers.go:470] GET https://192.168.120.172:8443/version
	I0409 01:14:45.021850    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:45.021850    7488 round_trippers.go:480]     Accept: application/json, */*
	I0409 01:14:45.021850    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:45.024857    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:45.024881    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:45.024881    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:45.024881    7488 round_trippers.go:587]     Content-Type: application/json
	I0409 01:14:45.024972    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:45.024972    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:45.024972    7488 round_trippers.go:587]     Content-Length: 263
	I0409 01:14:45.024972    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:45 GMT
	I0409 01:14:45.024972    7488 round_trippers.go:587]     Audit-Id: e905e7ce-9570-47e1-92ac-368bd324818a
	I0409 01:14:45.025040    7488 discovery_client.go:658] "Response Body" body=<
		{
		  "major": "1",
		  "minor": "32",
		  "gitVersion": "v1.32.2",
		  "gitCommit": "67a30c0adcf52bd3f56ff0893ce19966be12991f",
		  "gitTreeState": "clean",
		  "buildDate": "2025-02-12T21:19:47Z",
		  "goVersion": "go1.23.6",
		  "compiler": "gc",
		  "platform": "linux/amd64"
		}
	 >
	I0409 01:14:45.025156    7488 api_server.go:141] control plane version: v1.32.2
	I0409 01:14:45.025182    7488 api_server.go:131] duration metric: took 11.2114ms to wait for apiserver health ...
	I0409 01:14:45.025215    7488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0409 01:14:45.025239    7488 type.go:204] "Request Body" body=""
	I0409 01:14:45.119059    7488 request.go:661] Waited for 93.7179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods
	I0409 01:14:45.119292    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods
	I0409 01:14:45.119292    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:45.119292    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:45.119292    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:45.124200    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:45.124284    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:45.124284    7488 round_trippers.go:587]     Audit-Id: 590cddcd-539b-4788-be4a-345f623f9937
	I0409 01:14:45.124376    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:45.124376    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:45.124376    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:45.124376    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:45.124497    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:45 GMT
	I0409 01:14:45.127632    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 a2 eb 03 0a  0a 0a 00 12 04 31 39 38  |ist..........198|
		00000020  38 1a 00 12 c7 28 0a af  19 0a 18 63 6f 72 65 64  |8....(.....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 64 35  |ns-668d6bf9bc-d5|
		00000040  34 73 34 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |4s4..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 31 32 34 33 31 66 32  |ystem".*$12431f2|
		00000070  37 2d 37 65 34 65 2d 34  31 63 39 2d 38 64 35 34  |7-7e4e-41c9-8d54|
		00000080  2d 62 63 37 62 65 32 30  37 34 62 39 63 32 04 31  |-bc7be2074b9c2.1|
		00000090  39 37 36 38 00 42 08 08  96 88 d7 bf 06 10 00 5a  |9768.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 309601 chars]
	 >
	I0409 01:14:45.128474    7488 system_pods.go:59] 12 kube-system pods found
	I0409 01:14:45.128606    7488 system_pods.go:61] "coredns-668d6bf9bc-d54s4" [12431f27-7e4e-41c9-8d54-bc7be2074b9c] Running
	I0409 01:14:45.128606    7488 system_pods.go:61] "etcd-multinode-611500" [e6b39b1a-a6d5-46d1-a56a-243c9bb6f563] Running
	I0409 01:14:45.128606    7488 system_pods.go:61] "kindnet-66fr6" [3127adff-6b68-4ae6-8fea-cbee940bb9df] Running
	I0409 01:14:45.128606    7488 system_pods.go:61] "kindnet-v66j5" [9200b124-3c4b-442b-99fd-49ccc2faf534] Running
	I0409 01:14:45.128606    7488 system_pods.go:61] "kindnet-vntlr" [2e088361-08c9-4325-8241-20f5f443dcf6] Running
	I0409 01:14:45.128606    7488 system_pods.go:61] "kube-apiserver-multinode-611500" [f9924754-f8c5-4a8b-9da2-23d8096a5ecf] Running
	I0409 01:14:45.128606    7488 system_pods.go:61] "kube-controller-manager-multinode-611500" [75af0b90-6c72-4624-8660-aa943fec9606] Running
	I0409 01:14:45.128606    7488 system_pods.go:61] "kube-proxy-bhjnx" [afb6da99-de99-49c4-b080-8500b4b08d9b] Running
	I0409 01:14:45.128606    7488 system_pods.go:61] "kube-proxy-xnh8p" [ed8e944e-e73d-444c-b1ee-d7155c771c96] Running
	I0409 01:14:45.128678    7488 system_pods.go:61] "kube-proxy-zxxgf" [3506eee7-d946-4dde-91c9-9fc5c1474434] Running
	I0409 01:14:45.128714    7488 system_pods.go:61] "kube-scheduler-multinode-611500" [9185d5c0-b28a-438c-b05a-64667e4ac3d7] Running
	I0409 01:14:45.128714    7488 system_pods.go:61] "storage-provisioner" [8f7ea37f-c3a7-44fc-ac99-c184b674aca3] Running
	I0409 01:14:45.128714    7488 system_pods.go:74] duration metric: took 103.4738ms to wait for pod list to return data ...
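
The "Waited ... due to client-side throttling, not priority and fairness" entries are emitted by client-go's token-bucket rate limiter when requests arrive faster than the configured QPS allows. A sketch of where that limiter is configured follows; the QPS and Burst values are illustrative assumptions, not the ones minikube actually sets.

    // Hedged sketch: client-go paces requests via rest.Config.QPS/Burst.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 5    // steady-state requests per second (assumed value)
        cfg.Burst = 10 // short bursts allowed above QPS (assumed value)
        return kubernetes.NewForConfig(cfg)
    }

    func main() {
        cs, err := newThrottledClient(`C:\Users\jenkins.minikube6\minikube-integration\kubeconfig`)
        fmt.Println(cs != nil, err)
    }
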
	I0409 01:14:45.128714    7488 default_sa.go:34] waiting for default service account to be created ...
	I0409 01:14:45.128838    7488 type.go:204] "Request Body" body=""
	I0409 01:14:45.319579    7488 request.go:661] Waited for 190.7394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/namespaces/default/serviceaccounts
	I0409 01:14:45.319803    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/default/serviceaccounts
	I0409 01:14:45.319803    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:45.319803    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:45.319803    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:45.323726    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:45.323726    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:45.323824    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:45.323824    7488 round_trippers.go:587]     Content-Length: 129
	I0409 01:14:45.323824    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:45 GMT
	I0409 01:14:45.323824    7488 round_trippers.go:587]     Audit-Id: d60fc6c7-cea1-4b35-87ed-4038ee20c28d
	I0409 01:14:45.323824    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:45.323824    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:45.323824    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:45.323910    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 18 0a 02  76 31 12 12 53 65 72 76  |k8s.....v1..Serv|
		00000010  69 63 65 41 63 63 6f 75  6e 74 4c 69 73 74 12 5d  |iceAccountList.]|
		00000020  0a 0a 0a 00 12 04 31 39  38 38 1a 00 12 4f 0a 4d  |......1988...O.M|
		00000030  0a 07 64 65 66 61 75 6c  74 12 00 1a 07 64 65 66  |..default....def|
		00000040  61 75 6c 74 22 00 2a 24  35 65 63 37 63 31 66 66  |ault".*$5ec7c1ff|
		00000050  2d 31 63 66 31 2d 34 64  30 32 2d 38 61 65 33 2d  |-1cf1-4d02-8ae3-|
		00000060  35 62 66 35 65 30 39 65  66 33 37 37 32 03 33 32  |5bf5e09ef3772.32|
		00000070  36 38 00 42 08 08 95 88  d7 bf 06 10 00 1a 00 22  |68.B..........."|
		00000080  00                                                |.|
	 >
	I0409 01:14:45.324017    7488 default_sa.go:45] found service account: "default"
	I0409 01:14:45.324017    7488 default_sa.go:55] duration metric: took 195.3007ms for default service account to be created ...
	I0409 01:14:45.324017    7488 system_pods.go:116] waiting for k8s-apps to be running ...
	I0409 01:14:45.324017    7488 type.go:204] "Request Body" body=""
	I0409 01:14:45.519813    7488 request.go:661] Waited for 195.7941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods
	I0409 01:14:45.520273    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods
	I0409 01:14:45.520273    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:45.520273    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:45.520273    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:45.524962    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:45.525009    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:45.525009    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:45 GMT
	I0409 01:14:45.525009    7488 round_trippers.go:587]     Audit-Id: aa927236-d6b0-4d33-9b82-f63c212cd579
	I0409 01:14:45.525009    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:45.525009    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:45.525009    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:45.525009    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:45.528326    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 a2 eb 03 0a  0a 0a 00 12 04 31 39 38  |ist..........198|
		00000020  38 1a 00 12 c7 28 0a af  19 0a 18 63 6f 72 65 64  |8....(.....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 64 35  |ns-668d6bf9bc-d5|
		00000040  34 73 34 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |4s4..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 31 32 34 33 31 66 32  |ystem".*$12431f2|
		00000070  37 2d 37 65 34 65 2d 34  31 63 39 2d 38 64 35 34  |7-7e4e-41c9-8d54|
		00000080  2d 62 63 37 62 65 32 30  37 34 62 39 63 32 04 31  |-bc7be2074b9c2.1|
		00000090  39 37 36 38 00 42 08 08  96 88 d7 bf 06 10 00 5a  |9768.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 309601 chars]
	 >
	I0409 01:14:45.529089    7488 system_pods.go:86] 12 kube-system pods found
	I0409 01:14:45.529152    7488 system_pods.go:89] "coredns-668d6bf9bc-d54s4" [12431f27-7e4e-41c9-8d54-bc7be2074b9c] Running
	I0409 01:14:45.529152    7488 system_pods.go:89] "etcd-multinode-611500" [e6b39b1a-a6d5-46d1-a56a-243c9bb6f563] Running
	I0409 01:14:45.529152    7488 system_pods.go:89] "kindnet-66fr6" [3127adff-6b68-4ae6-8fea-cbee940bb9df] Running
	I0409 01:14:45.529152    7488 system_pods.go:89] "kindnet-v66j5" [9200b124-3c4b-442b-99fd-49ccc2faf534] Running
	I0409 01:14:45.529234    7488 system_pods.go:89] "kindnet-vntlr" [2e088361-08c9-4325-8241-20f5f443dcf6] Running
	I0409 01:14:45.529234    7488 system_pods.go:89] "kube-apiserver-multinode-611500" [f9924754-f8c5-4a8b-9da2-23d8096a5ecf] Running
	I0409 01:14:45.529234    7488 system_pods.go:89] "kube-controller-manager-multinode-611500" [75af0b90-6c72-4624-8660-aa943fec9606] Running
	I0409 01:14:45.529234    7488 system_pods.go:89] "kube-proxy-bhjnx" [afb6da99-de99-49c4-b080-8500b4b08d9b] Running
	I0409 01:14:45.529234    7488 system_pods.go:89] "kube-proxy-xnh8p" [ed8e944e-e73d-444c-b1ee-d7155c771c96] Running
	I0409 01:14:45.529234    7488 system_pods.go:89] "kube-proxy-zxxgf" [3506eee7-d946-4dde-91c9-9fc5c1474434] Running
	I0409 01:14:45.529234    7488 system_pods.go:89] "kube-scheduler-multinode-611500" [9185d5c0-b28a-438c-b05a-64667e4ac3d7] Running
	I0409 01:14:45.529288    7488 system_pods.go:89] "storage-provisioner" [8f7ea37f-c3a7-44fc-ac99-c184b674aca3] Running
	I0409 01:14:45.529288    7488 system_pods.go:126] duration metric: took 205.2683ms to wait for k8s-apps to be running ...
	I0409 01:14:45.529320    7488 system_svc.go:44] waiting for kubelet service to be running ....
	I0409 01:14:45.538883    7488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0409 01:14:45.567550    7488 system_svc.go:56] duration metric: took 38.2302ms WaitForService to wait for kubelet
	I0409 01:14:45.567550    7488 kubeadm.go:582] duration metric: took 29.5530053s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0409 01:14:45.567550    7488 node_conditions.go:102] verifying NodePressure condition ...
	I0409 01:14:45.567550    7488 type.go:204] "Request Body" body=""
	I0409 01:14:45.719878    7488 request.go:661] Waited for 152.3256ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/nodes
	I0409 01:14:45.720215    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes
	I0409 01:14:45.720215    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:45.720215    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:45.720215    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:45.723687    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:45.723687    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:45.723687    7488 round_trippers.go:587]     Audit-Id: fe63e8a5-42f1-4590-8d1f-b24e839c954a
	I0409 01:14:45.723687    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:45.723687    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:45.723687    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:45.723687    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:45.723687    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:45 GMT
	I0409 01:14:45.724465    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 f3 5d 0a  0a 0a 00 12 04 31 39 38  |List..]......198|
		00000020  38 1a 00 12 d5 24 0a f8  11 0a 10 6d 75 6c 74 69  |8....$.....multi|
		00000030  6e 6f 64 65 2d 36 31 31  35 30 30 12 00 1a 00 22  |node-611500...."|
		00000040  00 2a 24 62 31 32 35 32  66 34 61 2d 32 32 33 30  |.*$b1252f4a-2230|
		00000050  2d 34 36 61 36 2d 39 33  38 62 2d 37 63 30 37 31  |-46a6-938b-7c071|
		00000060  31 31 33 33 34 32 34 32  04 31 39 35 39 38 00 42  |11334242.19598.B|
		00000070  08 08 8d 88 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000080  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000090  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		000000a0  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000b0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/ar [truncated 58461 chars]
	 >
	I0409 01:14:45.724799    7488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0409 01:14:45.724856    7488 node_conditions.go:123] node cpu capacity is 2
	I0409 01:14:45.724856    7488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0409 01:14:45.724856    7488 node_conditions.go:123] node cpu capacity is 2
	I0409 01:14:45.724969    7488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0409 01:14:45.724969    7488 node_conditions.go:123] node cpu capacity is 2
	I0409 01:14:45.724969    7488 node_conditions.go:105] duration metric: took 157.4164ms to run NodePressure ...
	I0409 01:14:45.724969    7488 start.go:241] waiting for startup goroutines ...
	I0409 01:14:45.724969    7488 start.go:246] waiting for cluster config update ...
	I0409 01:14:45.725064    7488 start.go:255] writing updated cluster config ...
	I0409 01:14:45.730329    7488 out.go:201] 
	I0409 01:14:45.733583    7488 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 01:14:45.747495    7488 config.go:182] Loaded profile config "multinode-611500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 01:14:45.747495    7488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\config.json ...
	I0409 01:14:45.756344    7488 out.go:177] * Starting "multinode-611500-m02" worker node in "multinode-611500" cluster
	I0409 01:14:45.758852    7488 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0409 01:14:45.758852    7488 cache.go:56] Caching tarball of preloaded images
	I0409 01:14:45.759628    7488 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0409 01:14:45.760058    7488 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0409 01:14:45.760058    7488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\config.json ...
	I0409 01:14:45.762862    7488 start.go:360] acquireMachinesLock for multinode-611500-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0409 01:14:45.763089    7488 start.go:364] duration metric: took 125.4µs to acquireMachinesLock for "multinode-611500-m02"
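
The acquireMachinesLock entry shows the pattern used before mutating a machine: try to take a named lock, retrying every Delay (500ms) until Timeout (13m0s) expires. A sketch of that retry loop is below, using an in-process sync.Mutex (Go 1.18+ TryLock) as a stand-in; the real lock is cross-process, which this example does not reproduce.

    // Hedged sketch of try-with-delay-until-timeout lock acquisition,
    // mirroring the Delay:500ms Timeout:13m0s parameters in the log.
    package main

    import (
        "errors"
        "fmt"
        "sync"
        "time"
    )

    func acquire(mu *sync.Mutex, delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if mu.TryLock() {
                return nil // lock held; caller must Unlock when done
            }
            time.Sleep(delay) // someone else holds it; retry after Delay
        }
        return errors.New("timed out acquiring machines lock")
    }

    func main() {
        var mu sync.Mutex
        fmt.Println(acquire(&mu, 500*time.Millisecond, 13*time.Minute))
    }
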
	I0409 01:14:45.763252    7488 start.go:96] Skipping create...Using existing machine configuration
	I0409 01:14:45.763252    7488 fix.go:54] fixHost starting: m02
	I0409 01:14:45.763887    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:14:47.936148    7488 main.go:141] libmachine: [stdout =====>] : Off
	
	I0409 01:14:47.936148    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:14:47.936148    7488 fix.go:112] recreateIfNeeded on multinode-611500-m02: state=Stopped err=<nil>
	W0409 01:14:47.936148    7488 fix.go:138] unexpected machine state, will restart: <nil>
	I0409 01:14:47.940871    7488 out.go:177] * Restarting existing hyperv VM for "multinode-611500-m02" ...
	I0409 01:14:47.943633    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-611500-m02
	I0409 01:14:51.034142    7488 main.go:141] libmachine: [stdout =====>] : 
	I0409 01:14:51.034142    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:14:51.034142    7488 main.go:141] libmachine: Waiting for host to start...
	I0409 01:14:51.035180    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:14:53.339184    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:14:53.339184    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:14:53.339781    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:14:55.843529    7488 main.go:141] libmachine: [stdout =====>] : 
	I0409 01:14:55.843529    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:14:56.844294    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:14:59.094874    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:14:59.094874    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:14:59.094874    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:01.722254    7488 main.go:141] libmachine: [stdout =====>] : 
	I0409 01:15:01.722254    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:02.722943    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:04.920427    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:04.920427    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:04.920427    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:07.494375    7488 main.go:141] libmachine: [stdout =====>] : 
	I0409 01:15:07.495062    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:08.496101    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:10.692006    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:10.692006    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:10.692006    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:13.201103    7488 main.go:141] libmachine: [stdout =====>] : 
	I0409 01:15:13.201103    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:14.203172    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:16.436209    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:16.436209    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:16.436717    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:19.030179    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:15:19.030179    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:19.034265    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:21.152735    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:21.152735    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:21.152735    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:23.678416    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:15:23.678486    7488 main.go:141] libmachine: [stderr =====>] : 
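
The repeating Get-VM state / ipaddresses[0] pairs above are a poll loop: after Start-VM, the adapter reports no address until the guest's DHCP lease lands (here at 01:15:19, roughly 28 seconds after boot), so the query is retried about once per second. A sketch of that loop follows; the 30-attempt cap is an assumption added so the example terminates, where the real loop is bounded by a larger timeout.

    // Hedged sketch of the IP poll loop visible above: keep asking Hyper-V
    // for the first adapter address until it comes back non-empty.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func waitForVMIP(vmName string, attempts int) (string, error) {
        query := fmt.Sprintf(
            "(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("powershell.exe",
                "-NoProfile", "-NonInteractive", query).Output()
            if err != nil {
                return "", err
            }
            if ip := strings.TrimSpace(string(out)); ip != "" {
                return ip, nil // DHCP lease arrived; address is usable
            }
            time.Sleep(1 * time.Second) // no address yet; poll again
        }
        return "", fmt.Errorf("no IP for %s after %d attempts", vmName, attempts)
    }

    func main() {
        fmt.Println(waitForVMIP("multinode-611500-m02", 30))
    }
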
	I0409 01:15:23.678600    7488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\config.json ...
	I0409 01:15:23.681530    7488 machine.go:93] provisionDockerMachine start ...
	I0409 01:15:23.681530    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:25.832645    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:25.832645    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:25.832727    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:28.357988    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:15:28.357988    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:28.364553    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:15:28.365311    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.114.152 22 <nil> <nil>}
	I0409 01:15:28.365311    7488 main.go:141] libmachine: About to run SSH command:
	hostname
	I0409 01:15:28.514656    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0409 01:15:28.514656    7488 buildroot.go:166] provisioning hostname "multinode-611500-m02"
	I0409 01:15:28.514656    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:30.731578    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:30.731578    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:30.731578    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:33.270811    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:15:33.271368    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:33.278535    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:15:33.278535    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.114.152 22 <nil> <nil>}
	I0409 01:15:33.278535    7488 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-611500-m02 && echo "multinode-611500-m02" | sudo tee /etc/hostname
	I0409 01:15:33.447119    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-611500-m02
	
	I0409 01:15:33.447196    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:35.551098    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:35.551636    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:35.551817    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:38.097233    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:15:38.097233    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:38.103092    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:15:38.104072    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.114.152 22 <nil> <nil>}
	I0409 01:15:38.104072    7488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-611500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-611500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-611500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0409 01:15:38.268677    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
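
The shell snippet above makes hostname resolution idempotent: if /etc/hosts already maps the new hostname it does nothing, otherwise it rewrites an existing 127.0.1.1 entry in place or appends one. The same logic in Go, operating on an in-memory copy of the file rather than via sed and tee over SSH (a sketch; the function name is invented for illustration):

    // Hedged sketch of the /etc/hosts fix-up performed remotely above.
    package main

    import (
        "fmt"
        "strings"
    )

    func ensureHostname(hosts, name string) string {
        lines := strings.Split(hosts, "\n")
        for _, l := range lines {
            if strings.HasSuffix(strings.TrimSpace(l), " "+name) {
                return hosts // hostname already mapped; nothing to do
            }
        }
        for i, l := range lines {
            if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name // replace the existing entry
                return strings.Join(lines, "\n")
            }
        }
        return hosts + "\n127.0.1.1 " + name // no entry at all; append one
    }

    func main() {
        fmt.Println(ensureHostname("127.0.0.1 localhost", "multinode-611500-m02"))
    }
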
	I0409 01:15:38.268677    7488 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0409 01:15:38.268825    7488 buildroot.go:174] setting up certificates
	I0409 01:15:38.268825    7488 provision.go:84] configureAuth start
	I0409 01:15:38.268825    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:40.396708    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:40.396773    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:40.396773    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:42.884035    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:15:42.884338    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:42.884338    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:45.021881    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:45.022115    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:45.022231    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:47.543154    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:15:47.543731    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:47.543785    7488 provision.go:143] copyHostCerts
	I0409 01:15:47.543785    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0409 01:15:47.543785    7488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0409 01:15:47.544305    7488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0409 01:15:47.544506    7488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0409 01:15:47.545752    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0409 01:15:47.545752    7488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0409 01:15:47.546281    7488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0409 01:15:47.546480    7488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0409 01:15:47.547835    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0409 01:15:47.547890    7488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0409 01:15:47.547890    7488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0409 01:15:47.548419    7488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0409 01:15:47.550054    7488 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-611500-m02 san=[127.0.0.1 192.168.114.152 localhost minikube multinode-611500-m02]
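
The server certificate above is issued per machine, signed by the shared CA, with a SAN set covering every name and address the Docker endpoint may be reached by: loopback, the VM's current IP, and the host names. A sketch of issuing such a certificate with Go's standard library follows; the CA is generated in memory purely so the example is self-contained (minikube instead loads ca.pem and ca-key.pem from disk), and error handling is trimmed for brevity.

    // Hedged sketch: issue a server cert with the SAN set from the log line.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway in-memory CA (assumption; real CA comes from disk).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs shown in the log above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-611500-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.114.152")},
            DNSNames:     []string{"localhost", "minikube", "multinode-611500-m02"},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Println(len(der), err)
    }
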
	I0409 01:15:47.601818    7488 provision.go:177] copyRemoteCerts
	I0409 01:15:47.609973    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0409 01:15:47.609973    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:49.734646    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:49.734646    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:49.734788    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:52.304326    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:15:52.304468    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:52.305263    7488 sshutil.go:53] new ssh client: &{IP:192.168.114.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500-m02\id_rsa Username:docker}
	I0409 01:15:52.418555    7488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8080938s)
	I0409 01:15:52.418601    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0409 01:15:52.419045    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0409 01:15:52.464199    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0409 01:15:52.464595    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0409 01:15:52.512719    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0409 01:15:52.512780    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0409 01:15:52.561075    7488 provision.go:87] duration metric: took 14.2920682s to configureAuth
	I0409 01:15:52.561075    7488 buildroot.go:189] setting minikube options for container-runtime
	I0409 01:15:52.562279    7488 config.go:182] Loaded profile config "multinode-611500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 01:15:52.562350    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:54.686908    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:54.687500    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:54.687500    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:57.188617    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:15:57.188673    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:57.194328    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:15:57.194950    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.114.152 22 <nil> <nil>}
	I0409 01:15:57.194950    7488 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0409 01:15:57.336329    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0409 01:15:57.336329    7488 buildroot.go:70] root file system type: tmpfs
	I0409 01:15:57.336329    7488 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0409 01:15:57.336329    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:59.454348    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:59.454348    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:59.455210    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:16:01.959448    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:16:01.959886    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:16:01.965179    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:16:01.966055    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.114.152 22 <nil> <nil>}
	I0409 01:16:01.966055    7488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.120.172"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0409 01:16:02.136257    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.120.172
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0409 01:16:02.136257    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:16:04.246481    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:16:04.247337    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:16:04.247337    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:16:06.772922    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:16:06.773715    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:16:06.778866    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:16:06.779487    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.114.152 22 <nil> <nil>}
	I0409 01:16:06.779487    7488 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0409 01:16:09.221234    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0409 01:16:09.221234    7488 machine.go:96] duration metric: took 45.5391261s to provisionDockerMachine
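
The unit-file update above is deliberately compare-then-swap: the candidate is uploaded as docker.service.new, diffed against the installed file, and only on a difference is it moved into place followed by daemon-reload, enable, and restart. Here the installed file did not exist yet, so diff failed and the swap ran, producing the "Created symlink" output. A sketch of composing that one-shot command; runSSH is a hypothetical stand-in for whatever remote runner is in scope, not a minikube or libmachine API.

    // Hedged sketch of the idempotent unit update shown above.
    package main

    import "fmt"

    func updateDockerUnit(runSSH func(cmd string) error) error {
        const unit = "/lib/systemd/system/docker.service"
        cmd := fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { "+
                "sudo mv %[1]s.new %[1]s; "+
                "sudo systemctl -f daemon-reload && "+
                "sudo systemctl -f enable docker && "+
                "sudo systemctl -f restart docker; }", unit)
        return runSSH(cmd) // no-op when the installed unit already matches
    }

    func main() {
        _ = updateDockerUnit(func(cmd string) error {
            fmt.Println(cmd) // print instead of executing remotely
            return nil
        })
    }
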
	I0409 01:16:09.221234    7488 start.go:293] postStartSetup for "multinode-611500-m02" (driver="hyperv")
	I0409 01:16:09.221234    7488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0409 01:16:09.233677    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0409 01:16:09.233677    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:16:11.439541    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:16:11.439541    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:16:11.440493    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:16:13.959202    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:16:13.959202    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:16:13.960674    7488 sshutil.go:53] new ssh client: &{IP:192.168.114.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500-m02\id_rsa Username:docker}
	I0409 01:16:14.077356    7488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8436178s)
	I0409 01:16:14.089661    7488 ssh_runner.go:195] Run: cat /etc/os-release
	I0409 01:16:14.096694    7488 command_runner.go:130] > NAME=Buildroot
	I0409 01:16:14.096694    7488 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0409 01:16:14.096694    7488 command_runner.go:130] > ID=buildroot
	I0409 01:16:14.096694    7488 command_runner.go:130] > VERSION_ID=2023.02.9
	I0409 01:16:14.096694    7488 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0409 01:16:14.096694    7488 info.go:137] Remote host: Buildroot 2023.02.9
	I0409 01:16:14.096694    7488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0409 01:16:14.096694    7488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0409 01:16:14.097740    7488 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
	I0409 01:16:14.097740    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /etc/ssl/certs/98642.pem
	I0409 01:16:14.107219    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0409 01:16:14.125906    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
	I0409 01:16:14.176812    7488 start.go:296] duration metric: took 4.9555152s for postStartSetup
	I0409 01:16:14.176910    7488 fix.go:56] duration metric: took 1m28.4125355s for fixHost
	I0409 01:16:14.176949    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:16:16.289509    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:16:16.290271    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:16:16.290271    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-611500" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-611500
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-611500: context deadline exceeded (0s)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-611500" : context deadline exceeded
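
The "(0s)" in the non-zero exit is telling: the test's overall deadline had already expired before the follow-up node list command started, so the harness's exec context failed immediately rather than after any wait. A minimal reproduction of that behavior:

    // Hedged sketch: exec.CommandContext refuses to start a process whose
    // context is already done, returning context.DeadlineExceeded at once,
    // which is why the command above fails in 0s.
    package main

    import (
        "context"
        "fmt"
        "os/exec"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 0)
        defer cancel()
        err := exec.CommandContext(ctx, "out/minikube-windows-amd64.exe",
            "node", "list", "-p", "multinode-611500").Run()
        fmt.Println(err) // context deadline exceeded
    }
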
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-611500	192.168.113.157
multinode-611500-m02	192.168.113.143
multinode-611500-m03	192.168.116.185

                                                
                                                
After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-611500 -n multinode-611500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-611500 -n multinode-611500: (12.2190446s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 logs -n 25: (9.2266603s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-611500 cp testdata\cp-test.txt                                                                                 | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:01 UTC | 09 Apr 25 01:01 UTC |
	|         | multinode-611500-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-611500 ssh -n                                                                                                  | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:01 UTC | 09 Apr 25 01:01 UTC |
	|         | multinode-611500-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-611500 cp multinode-611500-m02:/home/docker/cp-test.txt                                                        | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:01 UTC | 09 Apr 25 01:02 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4275839031\001\cp-test_multinode-611500-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-611500 ssh -n                                                                                                  | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:02 UTC | 09 Apr 25 01:02 UTC |
	|         | multinode-611500-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-611500 cp multinode-611500-m02:/home/docker/cp-test.txt                                                        | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:02 UTC | 09 Apr 25 01:02 UTC |
	|         | multinode-611500:/home/docker/cp-test_multinode-611500-m02_multinode-611500.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-611500 ssh -n                                                                                                  | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:02 UTC | 09 Apr 25 01:02 UTC |
	|         | multinode-611500-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-611500 ssh -n multinode-611500 sudo cat                                                                        | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:02 UTC | 09 Apr 25 01:02 UTC |
	|         | /home/docker/cp-test_multinode-611500-m02_multinode-611500.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-611500 cp multinode-611500-m02:/home/docker/cp-test.txt                                                        | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:02 UTC | 09 Apr 25 01:03 UTC |
	|         | multinode-611500-m03:/home/docker/cp-test_multinode-611500-m02_multinode-611500-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-611500 ssh -n                                                                                                  | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:03 UTC | 09 Apr 25 01:03 UTC |
	|         | multinode-611500-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-611500 ssh -n multinode-611500-m03 sudo cat                                                                    | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:03 UTC | 09 Apr 25 01:03 UTC |
	|         | /home/docker/cp-test_multinode-611500-m02_multinode-611500-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-611500 cp testdata\cp-test.txt                                                                                 | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:03 UTC | 09 Apr 25 01:03 UTC |
	|         | multinode-611500-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-611500 ssh -n                                                                                                  | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:03 UTC | 09 Apr 25 01:03 UTC |
	|         | multinode-611500-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-611500 cp multinode-611500-m03:/home/docker/cp-test.txt                                                        | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:03 UTC | 09 Apr 25 01:03 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4275839031\001\cp-test_multinode-611500-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-611500 ssh -n                                                                                                  | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:03 UTC | 09 Apr 25 01:04 UTC |
	|         | multinode-611500-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-611500 cp multinode-611500-m03:/home/docker/cp-test.txt                                                        | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:04 UTC | 09 Apr 25 01:04 UTC |
	|         | multinode-611500:/home/docker/cp-test_multinode-611500-m03_multinode-611500.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-611500 ssh -n                                                                                                  | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:04 UTC | 09 Apr 25 01:04 UTC |
	|         | multinode-611500-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-611500 ssh -n multinode-611500 sudo cat                                                                        | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:04 UTC | 09 Apr 25 01:04 UTC |
	|         | /home/docker/cp-test_multinode-611500-m03_multinode-611500.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-611500 cp multinode-611500-m03:/home/docker/cp-test.txt                                                        | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:04 UTC | 09 Apr 25 01:04 UTC |
	|         | multinode-611500-m02:/home/docker/cp-test_multinode-611500-m03_multinode-611500-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-611500 ssh -n                                                                                                  | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:04 UTC | 09 Apr 25 01:05 UTC |
	|         | multinode-611500-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-611500 ssh -n multinode-611500-m02 sudo cat                                                                    | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:05 UTC | 09 Apr 25 01:05 UTC |
	|         | /home/docker/cp-test_multinode-611500-m03_multinode-611500-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-611500 node stop m03                                                                                           | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:05 UTC | 09 Apr 25 01:05 UTC |
	| node    | multinode-611500 node start                                                                                              | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:06 UTC | 09 Apr 25 01:09 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-611500                                                                                                 | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:09 UTC |                     |
	| stop    | -p multinode-611500                                                                                                      | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:09 UTC | 09 Apr 25 01:11 UTC |
	| start   | -p multinode-611500                                                                                                      | multinode-611500 | minikube6\jenkins | v1.35.0 | 09 Apr 25 01:11 UTC |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/09 01:11:24
	Running on machine: minikube6
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0409 01:11:24.044830    7488 out.go:345] Setting OutFile to fd 1980 ...
	I0409 01:11:24.130740    7488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 01:11:24.130740    7488 out.go:358] Setting ErrFile to fd 1672...
	I0409 01:11:24.130740    7488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 01:11:24.151836    7488 out.go:352] Setting JSON to false
	I0409 01:11:24.156000    7488 start.go:129] hostinfo: {"hostname":"minikube6","uptime":18081,"bootTime":1744143002,"procs":178,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0409 01:11:24.156000    7488 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0409 01:11:24.324550    7488 out.go:177] * [multinode-611500] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0409 01:11:24.354536    7488 notify.go:220] Checking for updates...
	I0409 01:11:24.362841    7488 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0409 01:11:24.395036    7488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0409 01:11:24.408614    7488 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0409 01:11:24.425062    7488 out.go:177]   - MINIKUBE_LOCATION=20501
	I0409 01:11:24.438855    7488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0409 01:11:24.451306    7488 config.go:182] Loaded profile config "multinode-611500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 01:11:24.452017    7488 driver.go:404] Setting default libvirt URI to qemu:///system
	I0409 01:11:29.922334    7488 out.go:177] * Using the hyperv driver based on existing profile
	I0409 01:11:29.948325    7488 start.go:297] selected driver: hyperv
	I0409 01:11:29.948452    7488 start.go:901] validating driver "hyperv" against &{Name:multinode-611500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-611500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.157 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.113.143 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.116.185 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0409 01:11:29.948663    7488 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0409 01:11:30.004917    7488 start_flags.go:975] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0409 01:11:30.005918    7488 cni.go:84] Creating CNI manager for ""
	I0409 01:11:30.005918    7488 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0409 01:11:30.005918    7488 start.go:340] cluster config:
	{Name:multinode-611500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-611500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.113.157 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.113.143 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.116.185 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0409 01:11:30.005918    7488 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0409 01:11:30.136771    7488 out.go:177] * Starting "multinode-611500" primary control-plane node in "multinode-611500" cluster
	I0409 01:11:30.145142    7488 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0409 01:11:30.146093    7488 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0409 01:11:30.146243    7488 cache.go:56] Caching tarball of preloaded images
	I0409 01:11:30.146570    7488 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0409 01:11:30.146570    7488 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0409 01:11:30.146570    7488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\config.json ...
	I0409 01:11:30.149567    7488 start.go:360] acquireMachinesLock for multinode-611500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0409 01:11:30.150214    7488 start.go:364] duration metric: took 544.2µs to acquireMachinesLock for "multinode-611500"
	I0409 01:11:30.150311    7488 start.go:96] Skipping create...Using existing machine configuration
	I0409 01:11:30.150311    7488 fix.go:54] fixHost starting: 
	I0409 01:11:30.151053    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:11:32.900054    7488 main.go:141] libmachine: [stdout =====>] : Off
	
	I0409 01:11:32.900054    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:11:32.900054    7488 fix.go:112] recreateIfNeeded on multinode-611500: state=Stopped err=<nil>
	W0409 01:11:32.900054    7488 fix.go:138] unexpected machine state, will restart: <nil>
	I0409 01:11:32.930172    7488 out.go:177] * Restarting existing hyperv VM for "multinode-611500" ...
	I0409 01:11:32.936482    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-611500
	I0409 01:11:35.982504    7488 main.go:141] libmachine: [stdout =====>] : 
	I0409 01:11:35.982976    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:11:35.982976    7488 main.go:141] libmachine: Waiting for host to start...
	I0409 01:11:35.982976    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:11:38.272777    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:11:38.272777    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:11:38.273894    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:11:40.819375    7488 main.go:141] libmachine: [stdout =====>] : 
	I0409 01:11:40.820147    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:11:41.820441    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:11:44.047000    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:11:44.047000    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:11:44.047000    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:11:46.572397    7488 main.go:141] libmachine: [stdout =====>] : 
	I0409 01:11:46.572397    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:11:47.573396    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:11:49.712609    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:11:49.713540    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:11:49.713856    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:11:52.188977    7488 main.go:141] libmachine: [stdout =====>] : 
	I0409 01:11:52.188977    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:11:53.190504    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:11:55.366168    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:11:55.367133    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:11:55.367133    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:11:57.859957    7488 main.go:141] libmachine: [stdout =====>] : 
	I0409 01:11:57.859957    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:11:58.860533    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:01.040095    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:01.040179    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:01.040179    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:03.572733    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:03.572733    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:03.576789    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:05.657928    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:05.657972    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:05.658080    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:08.130573    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:08.131079    7488 main.go:141] libmachine: [stderr =====>] : 
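
Note: the repeated Get-VM state / ipaddresses[0] queries above are the hyperv driver polling until the restarted guest reports an IPv4 address; the VM shows "Running" several seconds before DHCP completes, which is why the first few address queries return empty stdout. A minimal Go sketch of that wait loop (illustrative only, not minikube's libmachine code; the VM name and one-second retry interval are taken from this log's cadence, and it runs only on a Windows host with Hyper-V):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// psOutput runs one PowerShell expression the way the log shows
// (-NoProfile -NonInteractive) and returns its trimmed stdout.
func psOutput(expr string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const vmName = "multinode-611500" // from the log above
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		state, err := psOutput(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vmName))
		if err != nil || state != "Running" {
			time.Sleep(time.Second)
			continue
		}
		ip, _ := psOutput(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName))
		if ip != "" { // empty until the guest acquires an address
			fmt.Println("guest IP:", ip)
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for guest IP")
}
```
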
	I0409 01:12:08.131438    7488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\config.json ...
	I0409 01:12:08.134499    7488 machine.go:93] provisionDockerMachine start ...
	I0409 01:12:08.134499    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:10.219873    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:10.220119    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:10.220254    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:12.707795    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:12.707795    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:12.714004    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:12:12.714158    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.120.172 22 <nil> <nil>}
	I0409 01:12:12.714778    7488 main.go:141] libmachine: About to run SSH command:
	hostname
	I0409 01:12:12.852142    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0409 01:12:12.852233    7488 buildroot.go:166] provisioning hostname "multinode-611500"
	I0409 01:12:12.852321    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:14.927391    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:14.928151    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:14.928151    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:17.391452    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:17.391452    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:17.399339    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:12:17.399683    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.120.172 22 <nil> <nil>}
	I0409 01:12:17.399683    7488 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-611500 && echo "multinode-611500" | sudo tee /etc/hostname
	I0409 01:12:17.569282    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-611500
	
	I0409 01:12:17.569412    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:19.659808    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:19.659808    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:19.660517    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:22.079274    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:22.079274    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:22.085484    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:12:22.085603    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.120.172 22 <nil> <nil>}
	I0409 01:12:22.086226    7488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-611500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-611500/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-611500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0409 01:12:22.238700    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
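
The shell snippet just above is minikube's /etc/hosts fixup: if no line already ends with the new hostname, it rewrites the 127.0.1.1 entry or appends one. The same logic expressed in Go, as an illustrative sketch only (the file path is the real target; writing it requires root, so experiment on a copy):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	const hostname = "multinode-611500"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	text := string(data)
	// Mirrors: grep -xq '.*\smultinode-611500' /etc/hosts
	if regexp.MustCompile(`(?m)^.*\s` + hostname + `$`).MatchString(text) {
		return // already mapped
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(text) {
		text = loopback.ReplaceAllString(text, "127.0.1.1 "+hostname)
	} else {
		text = strings.TrimRight(text, "\n") + "\n127.0.1.1 " + hostname + "\n"
	}
	if err := os.WriteFile("/etc/hosts", []byte(text), 0644); err != nil {
		panic(err)
	}
	fmt.Println("hostname mapping ensured")
}
```
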
	I0409 01:12:22.238834    7488 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0409 01:12:22.238949    7488 buildroot.go:174] setting up certificates
	I0409 01:12:22.238949    7488 provision.go:84] configureAuth start
	I0409 01:12:22.239046    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:24.286455    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:24.286455    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:24.286843    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:26.725682    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:26.726409    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:26.726520    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:28.873228    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:28.873228    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:28.873798    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:31.373353    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:31.373353    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:31.373913    7488 provision.go:143] copyHostCerts
	I0409 01:12:31.374115    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0409 01:12:31.374463    7488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0409 01:12:31.374550    7488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0409 01:12:31.375120    7488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0409 01:12:31.376717    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0409 01:12:31.376932    7488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0409 01:12:31.376932    7488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0409 01:12:31.377469    7488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0409 01:12:31.378774    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0409 01:12:31.378934    7488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0409 01:12:31.378934    7488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0409 01:12:31.378934    7488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0409 01:12:31.380372    7488 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-611500 san=[127.0.0.1 192.168.120.172 localhost minikube multinode-611500]
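
The server cert generated here carries the SAN list logged above (127.0.0.1, the guest IP, localhost, minikube, and the profile name) so the Docker TLS endpoint is valid under any of those names. A hedged Go sketch of producing such a certificate with crypto/x509; for brevity it self-signs, whereas minikube signs with its ca.pem/ca-key.pem, and the 26280h lifetime is the CertExpiration value from the config dump:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-611500"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs exactly as logged: DNS names plus IP addresses.
		DNSNames:    []string{"localhost", "minikube", "multinode-611500"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.120.172")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```
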
	I0409 01:12:31.821702    7488 provision.go:177] copyRemoteCerts
	I0409 01:12:31.834522    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0409 01:12:31.834752    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:33.956514    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:33.956875    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:33.956875    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:36.445084    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:36.445084    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:36.446048    7488 sshutil.go:53] new ssh client: &{IP:192.168.120.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\id_rsa Username:docker}
	I0409 01:12:36.557082    7488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7224408s)
	I0409 01:12:36.557137    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0409 01:12:36.557290    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0409 01:12:36.602221    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0409 01:12:36.602221    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0409 01:12:36.650714    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0409 01:12:36.651283    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0409 01:12:36.696514    7488 provision.go:87] duration metric: took 14.4572627s to configureAuth
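
Each ssh_runner Run/scp step above goes over an SSH session authenticated with the machine's id_rsa as user "docker". A bare-bones version of that remote runner using golang.org/x/crypto/ssh (a sketch, not minikube's sshutil; host-key checking is disabled here, which is tolerable for a throwaway local VM but never for production, and the key path and IP are copied from this log):

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\id_rsa`
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
	}
	client, err := ssh.Dial("tcp", "192.168.120.172:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.Output("sudo mkdir -p /etc/docker && echo ok")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```
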
	I0409 01:12:36.696577    7488 buildroot.go:189] setting minikube options for container-runtime
	I0409 01:12:36.697710    7488 config.go:182] Loaded profile config "multinode-611500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 01:12:36.697870    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:38.822850    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:38.823351    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:38.823351    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:41.275713    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:41.275713    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:41.282250    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:12:41.282528    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.120.172 22 <nil> <nil>}
	I0409 01:12:41.282528    7488 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0409 01:12:41.415451    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0409 01:12:41.415451    7488 buildroot.go:70] root file system type: tmpfs
	I0409 01:12:41.415744    7488 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0409 01:12:41.415850    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:43.465288    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:43.465288    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:43.466018    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:45.951733    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:45.951733    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:45.957735    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:12:45.958266    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.120.172 22 <nil> <nil>}
	I0409 01:12:45.958565    7488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0409 01:12:46.127008    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0409 01:12:46.127008    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:48.234237    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:48.234237    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:48.234664    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:50.717069    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:50.717176    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:50.724829    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:12:50.725610    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.120.172 22 <nil> <nil>}
	I0409 01:12:50.725610    7488 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0409 01:12:53.352381    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0409 01:12:53.352381    7488 machine.go:96] duration metric: took 45.2173037s to provisionDockerMachine
	I0409 01:12:53.352381    7488 start.go:293] postStartSetup for "multinode-611500" (driver="hyperv")
	I0409 01:12:53.352381    7488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0409 01:12:53.365715    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0409 01:12:53.365715    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:12:55.543657    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:12:55.543731    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:55.543903    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:12:58.014968    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:12:58.014968    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:12:58.015749    7488 sshutil.go:53] new ssh client: &{IP:192.168.120.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\id_rsa Username:docker}
	I0409 01:12:58.132179    7488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7662987s)
	I0409 01:12:58.147237    7488 ssh_runner.go:195] Run: cat /etc/os-release
	I0409 01:12:58.158008    7488 command_runner.go:130] > NAME=Buildroot
	I0409 01:12:58.158008    7488 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0409 01:12:58.158008    7488 command_runner.go:130] > ID=buildroot
	I0409 01:12:58.158008    7488 command_runner.go:130] > VERSION_ID=2023.02.9
	I0409 01:12:58.158008    7488 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0409 01:12:58.158008    7488 info.go:137] Remote host: Buildroot 2023.02.9
	I0409 01:12:58.158008    7488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0409 01:12:58.158008    7488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0409 01:12:58.159040    7488 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
	I0409 01:12:58.159040    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /etc/ssl/certs/98642.pem
	I0409 01:12:58.173318    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0409 01:12:58.196642    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
	I0409 01:12:58.242208    7488 start.go:296] duration metric: took 4.889764s for postStartSetup
	I0409 01:12:58.242334    7488 fix.go:56] duration metric: took 1m28.0908954s for fixHost
	I0409 01:12:58.242334    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:13:00.371136    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:13:00.372044    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:13:00.372350    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:13:02.917808    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:13:02.918446    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:13:02.924303    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:13:02.924447    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.120.172 22 <nil> <nil>}
	I0409 01:13:02.924447    7488 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0409 01:13:03.055465    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744161183.075461872
	
	I0409 01:13:03.055598    7488 fix.go:216] guest clock: 1744161183.075461872
	I0409 01:13:03.055598    7488 fix.go:229] Guest: 2025-04-09 01:13:03.075461872 +0000 UTC Remote: 2025-04-09 01:12:58.242334 +0000 UTC m=+94.294803901 (delta=4.833127872s)
	I0409 01:13:03.055750    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:13:05.187244    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:13:05.187244    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:13:05.187834    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:13:07.706514    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:13:07.706786    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:13:07.712868    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:13:07.712868    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.120.172 22 <nil> <nil>}
	I0409 01:13:07.712868    7488 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744161183
	I0409 01:13:07.856863    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed Apr  9 01:13:03 UTC 2025
	
	I0409 01:13:07.856863    7488 fix.go:236] clock set: Wed Apr  9 01:13:03 UTC 2025
	 (err=<nil>)
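
The clock fixup above reads the guest's clock over SSH (`date +%s.%N`), diffs it against the host's view, and resets the guest with `sudo date -s @<epoch>` when they drift; a stale clock on a VM restored from a stopped state would otherwise break TLS validation and apiserver health checks. Reproducing the arithmetic in Go with the exact values from this log (the one-second threshold is an assumption for illustration):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Guest clock string exactly as `date +%s.%N` returned it above.
	guestRaw := "1744161183.075461872"
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec).UTC()

	// Host-side "Remote" timestamp from the same fix.go:229 line.
	host := time.Date(2025, 4, 9, 1, 12, 58, 242334000, time.UTC)

	delta := guest.Sub(host)
	fmt.Printf("delta=%v\n", delta) // prints delta=4.833127872s, matching the log
	if delta > time.Second || delta < -time.Second {
		// minikube then runs: sudo date -s @1744161183
		fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
	}
}
```
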
	I0409 01:13:07.856863    7488 start.go:83] releasing machines lock for "multinode-611500", held for 1m37.7053676s
	I0409 01:13:07.857474    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:13:09.970430    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:13:09.970430    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:13:09.971541    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:13:12.498657    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:13:12.498657    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:13:12.503585    7488 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0409 01:13:12.503735    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:13:12.513069    7488 ssh_runner.go:195] Run: cat /version.json
	I0409 01:13:12.513069    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:13:14.726777    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:13:14.726963    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:13:14.726963    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:13:14.727044    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:13:14.727044    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:13:14.727044    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:13:17.335662    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:13:17.336313    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:13:17.336313    7488 sshutil.go:53] new ssh client: &{IP:192.168.120.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\id_rsa Username:docker}
	I0409 01:13:17.365635    7488 main.go:141] libmachine: [stdout =====>] : 192.168.120.172
	
	I0409 01:13:17.365635    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:13:17.366275    7488 sshutil.go:53] new ssh client: &{IP:192.168.120.172 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\id_rsa Username:docker}
	I0409 01:13:17.430552    7488 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0409 01:13:17.430728    7488 ssh_runner.go:235] Completed: cat /version.json: (4.9175958s)
	I0409 01:13:17.442251    7488 ssh_runner.go:195] Run: systemctl --version
	I0409 01:13:17.446195    7488 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0409 01:13:17.447450    7488 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9436641s)
	W0409 01:13:17.447450    7488 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
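The status-127 failure above is the shell's "command not found" code: the connectivity probe ran the Windows binary name curl.exe inside the Linux guest, so the probe itself failed rather than the network, and that is what later surfaces as the "Failing to connect to https://registry.k8s.io/" warning. A sketch of separating that case from a genuine network error, assuming the probe is re-run through bash (not minikube code):

    // Treat exit status 127 ("command not found") differently from a real network failure.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        err := exec.Command("bash", "-c", "curl.exe -sS -m 2 https://registry.k8s.io/").Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 127 {
            fmt.Println("probe binary missing in guest; retry with plain `curl`")
        }
    }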
	I0409 01:13:17.455280    7488 command_runner.go:130] > systemd 252 (252)
	I0409 01:13:17.455280    7488 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0409 01:13:17.467523    7488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0409 01:13:17.475405    7488 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0409 01:13:17.476493    7488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0409 01:13:17.485811    7488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0409 01:13:17.516740    7488 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0409 01:13:17.516740    7488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0409 01:13:17.516740    7488 start.go:495] detecting cgroup driver to use...
	I0409 01:13:17.516740    7488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0409 01:13:17.548230    7488 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	W0409 01:13:17.560986    7488 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0409 01:13:17.560986    7488 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0409 01:13:17.562333    7488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0409 01:13:17.591510    7488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0409 01:13:17.610371    7488 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0409 01:13:17.621732    7488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0409 01:13:17.650746    7488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0409 01:13:17.681949    7488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0409 01:13:17.710530    7488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0409 01:13:17.741508    7488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0409 01:13:17.770114    7488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0409 01:13:17.802673    7488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0409 01:13:17.833932    7488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
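The sed commands above rewrite /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false to match the "cgroupfs" driver chosen at 01:13:17.610371, migrate legacy runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. The same kind of anchored, capture-preserving substitution in Go (a sketch; the values mirror the log, not minikube source):

    // One of the rewrites above, done with Go's regexp package instead of sed -r.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }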
	I0409 01:13:17.864420    7488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0409 01:13:17.881103    7488 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0409 01:13:17.881361    7488 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0409 01:13:17.893007    7488 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0409 01:13:17.929138    7488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
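The sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which is why the very next steps are modprobe br_netfilter and enabling IPv4 forwarding. A sketch of that check-then-load fallback (hypothetical helper; requires root):

    // If the bridge-netfilter sysctl is absent, load br_netfilter, then enable forwarding.
    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); os.IsNotExist(err) {
            exec.Command("sudo", "modprobe", "br_netfilter").Run()
        }
        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`; needs root.
        os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
    }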
	I0409 01:13:17.955430    7488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 01:13:18.138081    7488 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0409 01:13:18.167441    7488 start.go:495] detecting cgroup driver to use...
	I0409 01:13:18.177442    7488 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0409 01:13:18.200777    7488 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0409 01:13:18.200777    7488 command_runner.go:130] > [Unit]
	I0409 01:13:18.200777    7488 command_runner.go:130] > Description=Docker Application Container Engine
	I0409 01:13:18.200777    7488 command_runner.go:130] > Documentation=https://docs.docker.com
	I0409 01:13:18.200777    7488 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0409 01:13:18.200777    7488 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0409 01:13:18.200963    7488 command_runner.go:130] > StartLimitBurst=3
	I0409 01:13:18.201002    7488 command_runner.go:130] > StartLimitIntervalSec=60
	I0409 01:13:18.201002    7488 command_runner.go:130] > [Service]
	I0409 01:13:18.201002    7488 command_runner.go:130] > Type=notify
	I0409 01:13:18.201002    7488 command_runner.go:130] > Restart=on-failure
	I0409 01:13:18.201049    7488 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0409 01:13:18.201049    7488 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0409 01:13:18.201083    7488 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0409 01:13:18.201083    7488 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0409 01:13:18.201083    7488 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0409 01:13:18.201133    7488 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0409 01:13:18.201133    7488 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0409 01:13:18.201174    7488 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0409 01:13:18.201219    7488 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0409 01:13:18.201260    7488 command_runner.go:130] > ExecStart=
	I0409 01:13:18.201411    7488 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0409 01:13:18.201471    7488 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0409 01:13:18.201471    7488 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0409 01:13:18.201502    7488 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0409 01:13:18.201502    7488 command_runner.go:130] > LimitNOFILE=infinity
	I0409 01:13:18.201557    7488 command_runner.go:130] > LimitNPROC=infinity
	I0409 01:13:18.201557    7488 command_runner.go:130] > LimitCORE=infinity
	I0409 01:13:18.201557    7488 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0409 01:13:18.201598    7488 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0409 01:13:18.201598    7488 command_runner.go:130] > TasksMax=infinity
	I0409 01:13:18.201598    7488 command_runner.go:130] > TimeoutStartSec=0
	I0409 01:13:18.201644    7488 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0409 01:13:18.201644    7488 command_runner.go:130] > Delegate=yes
	I0409 01:13:18.201644    7488 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0409 01:13:18.201644    7488 command_runner.go:130] > KillMode=process
	I0409 01:13:18.201684    7488 command_runner.go:130] > [Install]
	I0409 01:13:18.201684    7488 command_runner.go:130] > WantedBy=multi-user.target
	I0409 01:13:18.213302    7488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0409 01:13:18.245337    7488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0409 01:13:18.294101    7488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0409 01:13:18.326585    7488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0409 01:13:18.379052    7488 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0409 01:13:18.448069    7488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0409 01:13:18.475123    7488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0409 01:13:18.509575    7488 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0409 01:13:18.520150    7488 ssh_runner.go:195] Run: which cri-dockerd
	I0409 01:13:18.526211    7488 command_runner.go:130] > /usr/bin/cri-dockerd
	I0409 01:13:18.538927    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0409 01:13:18.556154    7488 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0409 01:13:18.605691    7488 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0409 01:13:18.804543    7488 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0409 01:13:18.979273    7488 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0409 01:13:18.979273    7488 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0409 01:13:19.028804    7488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 01:13:19.215662    7488 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0409 01:13:21.915269    7488 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6995716s)
	I0409 01:13:21.926704    7488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0409 01:13:21.964157    7488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0409 01:13:21.999196    7488 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0409 01:13:22.203016    7488 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0409 01:13:22.387131    7488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 01:13:22.584835    7488 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0409 01:13:22.623645    7488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0409 01:13:22.654650    7488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 01:13:22.857009    7488 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0409 01:13:22.964438    7488 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0409 01:13:22.975931    7488 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0409 01:13:22.985074    7488 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0409 01:13:22.985074    7488 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0409 01:13:22.985074    7488 command_runner.go:130] > Device: 0,22	Inode: 842         Links: 1
	I0409 01:13:22.985074    7488 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0409 01:13:22.985074    7488 command_runner.go:130] > Access: 2025-04-09 01:13:22.900320156 +0000
	I0409 01:13:22.985289    7488 command_runner.go:130] > Modify: 2025-04-09 01:13:22.900320156 +0000
	I0409 01:13:22.985338    7488 command_runner.go:130] > Change: 2025-04-09 01:13:22.904320186 +0000
	I0409 01:13:22.985338    7488 command_runner.go:130] >  Birth: -
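start.go:542 allows up to 60s for /var/run/cri-dockerd.sock to appear after cri-docker.service is restarted; the stat output above shows the socket already present. A minimal polling sketch of that wait (illustrative, not minikube's implementation):

    // Poll for a path until it exists or the deadline passes.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        fmt.Println(waitForPath("/var/run/cri-dockerd.sock", 60*time.Second))
    }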
	I0409 01:13:22.985446    7488 start.go:563] Will wait 60s for crictl version
	I0409 01:13:22.995543    7488 ssh_runner.go:195] Run: which crictl
	I0409 01:13:23.001057    7488 command_runner.go:130] > /usr/bin/crictl
	I0409 01:13:23.012641    7488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0409 01:13:23.060711    7488 command_runner.go:130] > Version:  0.1.0
	I0409 01:13:23.060711    7488 command_runner.go:130] > RuntimeName:  docker
	I0409 01:13:23.060711    7488 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0409 01:13:23.060711    7488 command_runner.go:130] > RuntimeApiVersion:  v1
	I0409 01:13:23.060711    7488 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0409 01:13:23.070324    7488 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0409 01:13:23.101284    7488 command_runner.go:130] > 27.4.0
	I0409 01:13:23.110132    7488 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0409 01:13:23.147143    7488 command_runner.go:130] > 27.4.0
	I0409 01:13:23.153437    7488 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0409 01:13:23.153437    7488 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0409 01:13:23.157802    7488 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0409 01:13:23.157802    7488 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0409 01:13:23.157802    7488 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0409 01:13:23.157802    7488 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:f4:da:75 Flags:up|broadcast|multicast|running}
	I0409 01:13:23.161691    7488 ip.go:214] interface addr: fe80::e8ab:9cc6:22b1:a5fc/64
	I0409 01:13:23.161835    7488 ip.go:214] interface addr: 192.168.112.1/20
	I0409 01:13:23.172408    7488 ssh_runner.go:195] Run: grep 192.168.112.1	host.minikube.internal$ /etc/hosts
	I0409 01:13:23.178060    7488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
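The grep/bash pipeline above is an idempotent /etc/hosts update: strip any existing host.minikube.internal entry, append the current host-side gateway IP (192.168.112.1, found on the "vEthernet (Default Switch)" interface), and copy the result back. Roughly the same logic in Go (a sketch; upsertHost is a hypothetical name):

    // Drop any stale "host.minikube.internal" line, then append the current mapping.
    package main

    import (
        "os"
        "strings"
    )

    func upsertHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() { _ = upsertHost("/etc/hosts", "192.168.112.1", "host.minikube.internal") }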
	I0409 01:13:23.203114    7488 kubeadm.go:883] updating cluster {Name:multinode-611500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-611500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.120.172 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.113.143 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.116.185 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0409 01:13:23.203114    7488 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0409 01:13:23.214847    7488 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0409 01:13:23.241950    7488 command_runner.go:130] > kindest/kindnetd:v20250214-acbabc1a
	I0409 01:13:23.241950    7488 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.2
	I0409 01:13:23.241950    7488 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.2
	I0409 01:13:23.241950    7488 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.2
	I0409 01:13:23.241950    7488 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.2
	I0409 01:13:23.241950    7488 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0409 01:13:23.241950    7488 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0409 01:13:23.241950    7488 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0409 01:13:23.241950    7488 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0409 01:13:23.241950    7488 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0409 01:13:23.241950    7488 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20250214-acbabc1a
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0409 01:13:23.241950    7488 docker.go:619] Images already preloaded, skipping extraction
	I0409 01:13:23.252482    7488 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0409 01:13:23.278026    7488 command_runner.go:130] > kindest/kindnetd:v20250214-acbabc1a
	I0409 01:13:23.278026    7488 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.2
	I0409 01:13:23.278026    7488 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.2
	I0409 01:13:23.278026    7488 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.2
	I0409 01:13:23.278026    7488 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.2
	I0409 01:13:23.278026    7488 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0409 01:13:23.278026    7488 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0409 01:13:23.278026    7488 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0409 01:13:23.278026    7488 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0409 01:13:23.278026    7488 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0409 01:13:23.278026    7488 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20250214-acbabc1a
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0409 01:13:23.278026    7488 cache_images.go:84] Images are preloaded, skipping loading
	I0409 01:13:23.278026    7488 kubeadm.go:934] updating node { 192.168.120.172 8443 v1.32.2 docker true true} ...
	I0409 01:13:23.278629    7488 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-611500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.120.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:multinode-611500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0409 01:13:23.289105    7488 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0409 01:13:23.350732    7488 command_runner.go:130] > cgroupfs
	I0409 01:13:23.350907    7488 cni.go:84] Creating CNI manager for ""
	I0409 01:13:23.350996    7488 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0409 01:13:23.351063    7488 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0409 01:13:23.351170    7488 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.120.172 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-611500 NodeName:multinode-611500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.120.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.120.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0409 01:13:23.351467    7488 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.120.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-611500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.120.172"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.120.172"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
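The generated kubeadm config above is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by "---". A decoder loop reads such a stream one document at a time, as this sketch shows (assumes the gopkg.in/yaml.v3 package is available):

    // Read each document of a multi-doc YAML stream in turn.
    package main

    import (
        "fmt"
        "io"
        "strings"

        "gopkg.in/yaml.v3"
    )

    func main() {
        stream := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n"
        dec := yaml.NewDecoder(strings.NewReader(stream))
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Println(doc["kind"]) // InitConfiguration, then KubeletConfiguration
        }
    }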
	I0409 01:13:23.363388    7488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0409 01:13:23.380948    7488 command_runner.go:130] > kubeadm
	I0409 01:13:23.380948    7488 command_runner.go:130] > kubectl
	I0409 01:13:23.380948    7488 command_runner.go:130] > kubelet
	I0409 01:13:23.380948    7488 binaries.go:44] Found k8s binaries, skipping transfer
	I0409 01:13:23.390929    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0409 01:13:23.406058    7488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0409 01:13:23.435463    7488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0409 01:13:23.462952    7488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2303 bytes)
	I0409 01:13:23.504629    7488 ssh_runner.go:195] Run: grep 192.168.120.172	control-plane.minikube.internal$ /etc/hosts
	I0409 01:13:23.511090    7488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.120.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0409 01:13:23.547217    7488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 01:13:23.724250    7488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0409 01:13:23.753999    7488 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500 for IP: 192.168.120.172
	I0409 01:13:23.754125    7488 certs.go:194] generating shared ca certs ...
	I0409 01:13:23.754217    7488 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 01:13:23.754566    7488 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0409 01:13:23.755228    7488 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0409 01:13:23.755228    7488 certs.go:256] generating profile certs ...
	I0409 01:13:23.756710    7488 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\client.key
	I0409 01:13:23.756710    7488 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key.70495b6d
	I0409 01:13:23.756710    7488 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt.70495b6d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.120.172]
	I0409 01:13:23.873720    7488 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt.70495b6d ...
	I0409 01:13:23.873720    7488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt.70495b6d: {Name:mk1f0b0fb179e64b9d993ea458f993460d72ba51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 01:13:23.875143    7488 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key.70495b6d ...
	I0409 01:13:23.875143    7488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key.70495b6d: {Name:mk56ffa6364a87645628d6f8b747da00a5a3e3e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 01:13:23.876159    7488 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt.70495b6d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt
	I0409 01:13:23.891858    7488 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key.70495b6d -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key
	I0409 01:13:23.893466    7488 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\proxy-client.key
	I0409 01:13:23.893466    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0409 01:13:23.893611    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0409 01:13:23.893851    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0409 01:13:23.894032    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0409 01:13:23.894092    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0409 01:13:23.894092    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0409 01:13:23.894839    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0409 01:13:23.895160    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0409 01:13:23.895477    7488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem (1338 bytes)
	W0409 01:13:23.895477    7488 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864_empty.pem, impossibly tiny 0 bytes
	I0409 01:13:23.896020    7488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0409 01:13:23.896175    7488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0409 01:13:23.896175    7488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0409 01:13:23.905841    7488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0409 01:13:23.906767    7488 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem (1708 bytes)
	I0409 01:13:23.907162    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /usr/share/ca-certificates/98642.pem
	I0409 01:13:23.907374    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0409 01:13:23.907374    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem -> /usr/share/ca-certificates/9864.pem
	I0409 01:13:23.908798    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0409 01:13:23.963556    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0409 01:13:24.009290    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0409 01:13:24.069626    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0409 01:13:24.115539    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0409 01:13:24.162954    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0409 01:13:24.208550    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0409 01:13:24.255232    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0409 01:13:24.300410    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /usr/share/ca-certificates/98642.pem (1708 bytes)
	I0409 01:13:24.346151    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0409 01:13:24.390876    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9864.pem --> /usr/share/ca-certificates/9864.pem (1338 bytes)
	I0409 01:13:24.438287    7488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0409 01:13:24.482063    7488 ssh_runner.go:195] Run: openssl version
	I0409 01:13:24.488753    7488 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0409 01:13:24.497752    7488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9864.pem && ln -fs /usr/share/ca-certificates/9864.pem /etc/ssl/certs/9864.pem"
	I0409 01:13:24.528067    7488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9864.pem
	I0409 01:13:24.535031    7488 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr  8 23:04 /usr/share/ca-certificates/9864.pem
	I0409 01:13:24.535119    7488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  8 23:04 /usr/share/ca-certificates/9864.pem
	I0409 01:13:24.546279    7488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9864.pem
	I0409 01:13:24.554340    7488 command_runner.go:130] > 51391683
	I0409 01:13:24.565665    7488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9864.pem /etc/ssl/certs/51391683.0"
	I0409 01:13:24.594717    7488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/98642.pem && ln -fs /usr/share/ca-certificates/98642.pem /etc/ssl/certs/98642.pem"
	I0409 01:13:24.624528    7488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/98642.pem
	I0409 01:13:24.631397    7488 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr  8 23:04 /usr/share/ca-certificates/98642.pem
	I0409 01:13:24.631397    7488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  8 23:04 /usr/share/ca-certificates/98642.pem
	I0409 01:13:24.643699    7488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/98642.pem
	I0409 01:13:24.651714    7488 command_runner.go:130] > 3ec20f2e
	I0409 01:13:24.666302    7488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/98642.pem /etc/ssl/certs/3ec20f2e.0"
	I0409 01:13:24.695383    7488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0409 01:13:24.726818    7488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0409 01:13:24.735662    7488 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr  8 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0409 01:13:24.735662    7488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  8 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0409 01:13:24.747336    7488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0409 01:13:24.755788    7488 command_runner.go:130] > b5213941
	I0409 01:13:24.768257    7488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
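The hash/symlink pairs above (51391683, 3ec20f2e, b5213941) follow OpenSSL's CA lookup convention: a trusted certificate is located under /etc/ssl/certs/<subject-hash>.0, where the hash comes from `openssl x509 -hash`. A sketch of installing one CA that way (installCA is a hypothetical helper; unlike the `ln -fs` in the log, os.Symlink does not overwrite an existing link):

    // Compute the subject hash and create the "<hash>.0" symlink OpenSSL looks for.
    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        return os.Symlink(pemPath, "/etc/ssl/certs/"+hash+".0")
    }

    func main() { _ = installCA("/usr/share/ca-certificates/minikubeCA.pem") }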
	I0409 01:13:24.799528    7488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0409 01:13:24.807326    7488 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0409 01:13:24.807326    7488 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0409 01:13:24.807415    7488 command_runner.go:130] > Device: 8,1	Inode: 5242721     Links: 1
	I0409 01:13:24.807415    7488 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0409 01:13:24.807415    7488 command_runner.go:130] > Access: 2025-04-09 00:49:09.242960801 +0000
	I0409 01:13:24.807415    7488 command_runner.go:130] > Modify: 2025-04-09 00:49:09.242960801 +0000
	I0409 01:13:24.807457    7488 command_runner.go:130] > Change: 2025-04-09 00:49:09.242960801 +0000
	I0409 01:13:24.807457    7488 command_runner.go:130] >  Birth: 2025-04-09 00:49:09.242960801 +0000
	I0409 01:13:24.818924    7488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0409 01:13:24.826916    7488 command_runner.go:130] > Certificate will not expire
	I0409 01:13:24.837905    7488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0409 01:13:24.846354    7488 command_runner.go:130] > Certificate will not expire
	I0409 01:13:24.858754    7488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0409 01:13:24.868312    7488 command_runner.go:130] > Certificate will not expire
	I0409 01:13:24.881994    7488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0409 01:13:24.891299    7488 command_runner.go:130] > Certificate will not expire
	I0409 01:13:24.902666    7488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0409 01:13:24.911985    7488 command_runner.go:130] > Certificate will not expire
	I0409 01:13:24.923638    7488 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0409 01:13:24.932761    7488 command_runner.go:130] > Certificate will not expire
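Each `openssl x509 -checkend 86400` run above exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, which is why every check prints "Certificate will not expire". The same test done natively with crypto/x509 (a sketch; the path is taken from the log):

    // Check whether a certificate remains valid for at least another 24 hours.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if time.Until(cert.NotAfter) > 24*time.Hour {
            fmt.Println("Certificate will not expire")
        }
    }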
	I0409 01:13:24.932761    7488 kubeadm.go:392] StartCluster: {Name:multinode-611500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-611500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.120.172 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.113.143 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.116.185 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0409 01:13:24.942035    7488 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0409 01:13:24.979620    7488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0409 01:13:25.000771    7488 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0409 01:13:25.000771    7488 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0409 01:13:25.000771    7488 command_runner.go:130] > /var/lib/minikube/etcd:
	I0409 01:13:25.000771    7488 command_runner.go:130] > member
	I0409 01:13:25.000771    7488 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0409 01:13:25.000771    7488 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0409 01:13:25.012426    7488 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0409 01:13:25.037358    7488 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0409 01:13:25.039189    7488 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-611500" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0409 01:13:25.040108    7488 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-611500" cluster setting kubeconfig missing "multinode-611500" context setting]
	I0409 01:13:25.040777    7488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 01:13:25.059735    7488 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0409 01:13:25.060293    7488 kapi.go:59] client config for multinode-611500: &rest.Config{Host:"https://192.168.120.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-611500/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-611500/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2809400), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0409 01:13:25.062008    7488 cert_rotation.go:140] Starting client certificate rotation controller
	I0409 01:13:25.062008    7488 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0409 01:13:25.062008    7488 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0409 01:13:25.062008    7488 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0409 01:13:25.062008    7488 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0409 01:13:25.072273    7488 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0409 01:13:25.089753    7488 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0409 01:13:25.089827    7488 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0409 01:13:25.089827    7488 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0409 01:13:25.089827    7488 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I0409 01:13:25.089877    7488 command_runner.go:130] >  kind: InitConfiguration
	I0409 01:13:25.089877    7488 command_runner.go:130] >  localAPIEndpoint:
	I0409 01:13:25.089877    7488 command_runner.go:130] > -  advertiseAddress: 192.168.113.157
	I0409 01:13:25.089877    7488 command_runner.go:130] > +  advertiseAddress: 192.168.120.172
	I0409 01:13:25.089877    7488 command_runner.go:130] >    bindPort: 8443
	I0409 01:13:25.089948    7488 command_runner.go:130] >  bootstrapTokens:
	I0409 01:13:25.089948    7488 command_runner.go:130] >    - groups:
	I0409 01:13:25.089948    7488 command_runner.go:130] > @@ -15,13 +15,13 @@
	I0409 01:13:25.089948    7488 command_runner.go:130] >    name: "multinode-611500"
	I0409 01:13:25.089948    7488 command_runner.go:130] >    kubeletExtraArgs:
	I0409 01:13:25.089948    7488 command_runner.go:130] >      - name: "node-ip"
	I0409 01:13:25.089948    7488 command_runner.go:130] > -      value: "192.168.113.157"
	I0409 01:13:25.089948    7488 command_runner.go:130] > +      value: "192.168.120.172"
	I0409 01:13:25.090133    7488 command_runner.go:130] >    taints: []
	I0409 01:13:25.090133    7488 command_runner.go:130] >  ---
	I0409 01:13:25.090133    7488 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I0409 01:13:25.090133    7488 command_runner.go:130] >  kind: ClusterConfiguration
	I0409 01:13:25.090133    7488 command_runner.go:130] >  apiServer:
	I0409 01:13:25.090133    7488 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "192.168.113.157"]
	I0409 01:13:25.090133    7488 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "192.168.120.172"]
	I0409 01:13:25.090133    7488 command_runner.go:130] >    extraArgs:
	I0409 01:13:25.090133    7488 command_runner.go:130] >      - name: "enable-admission-plugins"
	I0409 01:13:25.090259    7488 command_runner.go:130] >        value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0409 01:13:25.090347    7488 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 192.168.113.157
	+  advertiseAddress: 192.168.120.172
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -15,13 +15,13 @@
	   name: "multinode-611500"
	   kubeletExtraArgs:
	     - name: "node-ip"
	-      value: "192.168.113.157"
	+      value: "192.168.120.172"
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "192.168.113.157"]
	+  certSANs: ["127.0.0.1", "localhost", "192.168.120.172"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	       value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	
	-- /stdout --
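kubeadm.go:640 decides to reconfigure because `diff -u` between the on-disk kubeadm.yaml and the freshly rendered kubeadm.yaml.new is non-empty; the hunks above show only the node IP moving from 192.168.113.157 to 192.168.120.172. diff signals "files differ" with exit status 1, which a caller can separate from real errors (status 2), as in this sketch (not minikube source):

    // Run diff -u and treat exit status 1 as "drift detected".
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            fmt.Printf("config drift detected:\n%s", out)
        }
    }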
	I0409 01:13:25.090386    7488 kubeadm.go:1160] stopping kube-system containers ...
	I0409 01:13:25.099340    7488 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0409 01:13:25.131365    7488 command_runner.go:130] > 934a19227ceb
	I0409 01:13:25.131557    7488 command_runner.go:130] > 81bdf2c1b915
	I0409 01:13:25.131557    7488 command_runner.go:130] > 5709459d3357
	I0409 01:13:25.131557    7488 command_runner.go:130] > 38b71116bee4
	I0409 01:13:25.131557    7488 command_runner.go:130] > 14703ff53a0b
	I0409 01:13:25.131557    7488 command_runner.go:130] > 1a9f657c2b5a
	I0409 01:13:25.131557    7488 command_runner.go:130] > 40c7183a37ea
	I0409 01:13:25.131557    7488 command_runner.go:130] > 0a2ad19ce50f
	I0409 01:13:25.131557    7488 command_runner.go:130] > 8fec401b4d08
	I0409 01:13:25.131557    7488 command_runner.go:130] > 45eca668cef5
	I0409 01:13:25.131557    7488 command_runner.go:130] > 729d2794ba86
	I0409 01:13:25.131557    7488 command_runner.go:130] > 9698a4747b5a
	I0409 01:13:25.131557    7488 command_runner.go:130] > 77b1d88aa162
	I0409 01:13:25.131557    7488 command_runner.go:130] > ac3e2538b3ca
	I0409 01:13:25.131557    7488 command_runner.go:130] > c41f8955903a
	I0409 01:13:25.131557    7488 command_runner.go:130] > bc594b9349b9
	I0409 01:13:25.131557    7488 docker.go:483] Stopping containers: [934a19227ceb 81bdf2c1b915 5709459d3357 38b71116bee4 14703ff53a0b 1a9f657c2b5a 40c7183a37ea 0a2ad19ce50f 8fec401b4d08 45eca668cef5 729d2794ba86 9698a4747b5a 77b1d88aa162 ac3e2538b3ca c41f8955903a bc594b9349b9]
	I0409 01:13:25.141187    7488 ssh_runner.go:195] Run: docker stop 934a19227ceb 81bdf2c1b915 5709459d3357 38b71116bee4 14703ff53a0b 1a9f657c2b5a 40c7183a37ea 0a2ad19ce50f 8fec401b4d08 45eca668cef5 729d2794ba86 9698a4747b5a 77b1d88aa162 ac3e2538b3ca c41f8955903a bc594b9349b9
	I0409 01:13:25.166886    7488 command_runner.go:130] > 934a19227ceb
	I0409 01:13:25.166886    7488 command_runner.go:130] > 81bdf2c1b915
	I0409 01:13:25.166886    7488 command_runner.go:130] > 5709459d3357
	I0409 01:13:25.166886    7488 command_runner.go:130] > 38b71116bee4
	I0409 01:13:25.167004    7488 command_runner.go:130] > 14703ff53a0b
	I0409 01:13:25.167004    7488 command_runner.go:130] > 1a9f657c2b5a
	I0409 01:13:25.167004    7488 command_runner.go:130] > 40c7183a37ea
	I0409 01:13:25.167004    7488 command_runner.go:130] > 0a2ad19ce50f
	I0409 01:13:25.167004    7488 command_runner.go:130] > 8fec401b4d08
	I0409 01:13:25.167004    7488 command_runner.go:130] > 45eca668cef5
	I0409 01:13:25.167004    7488 command_runner.go:130] > 729d2794ba86
	I0409 01:13:25.167004    7488 command_runner.go:130] > 9698a4747b5a
	I0409 01:13:25.167004    7488 command_runner.go:130] > 77b1d88aa162
	I0409 01:13:25.167004    7488 command_runner.go:130] > ac3e2538b3ca
	I0409 01:13:25.167108    7488 command_runner.go:130] > c41f8955903a
	I0409 01:13:25.167108    7488 command_runner.go:130] > bc594b9349b9
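Before restarting the control plane, docker.go:483 collects every container whose name matches the kubelet naming scheme for the kube-system namespace and stops them in a single `docker stop`, as the ID lists above show. Roughly the same sequence in Go (a sketch, not minikube code):

    // List kube-system containers by name pattern, then stop them all at once.
    package main

    import (
        "os/exec"
        "strings"
    )

    func main() {
        out, _ := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
        ids := strings.Fields(string(out))
        if len(ids) > 0 {
            exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
        }
    }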
	I0409 01:13:25.178188    7488 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0409 01:13:25.218391    7488 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0409 01:13:25.237526    7488 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0409 01:13:25.237526    7488 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0409 01:13:25.237526    7488 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0409 01:13:25.237526    7488 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0409 01:13:25.238661    7488 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0409 01:13:25.238661    7488 kubeadm.go:157] found existing configuration files:
	
	I0409 01:13:25.250436    7488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0409 01:13:25.274293    7488 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0409 01:13:25.276177    7488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0409 01:13:25.287842    7488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0409 01:13:25.318654    7488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0409 01:13:25.333664    7488 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0409 01:13:25.333664    7488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0409 01:13:25.343598    7488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0409 01:13:25.371140    7488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0409 01:13:25.387373    7488 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0409 01:13:25.388339    7488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0409 01:13:25.400052    7488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0409 01:13:25.426641    7488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0409 01:13:25.442513    7488 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0409 01:13:25.442513    7488 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0409 01:13:25.453854    7488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
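
	Each grep/rm pair above applies the same stale-config rule: a kubeconfig under /etc/kubernetes is kept only if it references the expected control-plane endpoint; otherwise it is removed (like rm -f, a missing file is fine) so kubeadm can regenerate it. A compact Go sketch of that rule, with a hypothetical removeIfStale helper:

		package main

		import (
			"errors"
			"fmt"
			"io/fs"
			"os"
			"strings"
		)

		const endpoint = "https://control-plane.minikube.internal:8443"

		// removeIfStale mirrors one grep + rm -f pair: keep the file only
		// if it references the expected endpoint, otherwise remove it.
		// As with rm -f, a file that does not exist is not an error.
		func removeIfStale(path string) error {
			data, err := os.ReadFile(path)
			if err == nil && strings.Contains(string(data), endpoint) {
				return nil // up to date, keep it
			}
			if err := os.Remove(path); err != nil && !errors.Is(err, fs.ErrNotExist) {
				return err
			}
			return nil
		}

		func main() {
			for _, f := range []string{"admin.conf", "kubelet.conf",
				"controller-manager.conf", "scheduler.conf"} {
				if err := removeIfStale("/etc/kubernetes/" + f); err != nil {
					fmt.Println(err)
				}
			}
		}
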
	I0409 01:13:25.484032    7488 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0409 01:13:25.503513    7488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0409 01:13:25.827655    7488 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0409 01:13:25.827655    7488 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0409 01:13:25.827655    7488 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0409 01:13:25.827655    7488 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0409 01:13:25.827811    7488 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0409 01:13:25.827811    7488 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0409 01:13:25.827811    7488 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0409 01:13:25.827811    7488 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0409 01:13:25.827811    7488 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0409 01:13:25.827811    7488 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0409 01:13:25.827811    7488 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0409 01:13:25.827895    7488 command_runner.go:130] > [certs] Using the existing "sa" key
	I0409 01:13:25.827933    7488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0409 01:13:26.612544    7488 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0409 01:13:26.612544    7488 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0409 01:13:26.612544    7488 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0409 01:13:26.612544    7488 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0409 01:13:26.612544    7488 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0409 01:13:26.612544    7488 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0409 01:13:26.612544    7488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0409 01:13:26.940998    7488 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0409 01:13:26.941045    7488 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0409 01:13:26.941076    7488 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0409 01:13:26.941128    7488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0409 01:13:27.023729    7488 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0409 01:13:27.024531    7488 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0409 01:13:27.024531    7488 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0409 01:13:27.024531    7488 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0409 01:13:27.024575    7488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0409 01:13:27.114628    7488 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
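
	Rather than a full kubeadm init, the soft restart replays five init phases in a fixed order against the freshly copied kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, and local etcd. The certs phase reuses every existing key and certificate, so the node's identity survives the restart. A sketch of the sequence (the loop itself is illustrative; the config path is taken from the log):

		package main

		import (
			"fmt"
			"os/exec"
		)

		func main() {
			// Phases in the order the log runs them: certs and kubeconfig
			// first, then kubelet, static-pod manifests, and local etcd.
			phases := [][]string{
				{"certs", "all"},
				{"kubeconfig", "all"},
				{"kubelet-start"},
				{"control-plane", "all"},
				{"etcd", "local"},
			}
			for _, p := range phases {
				args := append([]string{"init", "phase"}, p...)
				args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
				if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
					fmt.Printf("phase %v failed: %v\n%s", p, err, out)
					return
				}
			}
		}
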
	I0409 01:13:27.114756    7488 api_server.go:52] waiting for apiserver process to appear ...
	I0409 01:13:27.126255    7488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 01:13:27.626398    7488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 01:13:28.125633    7488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 01:13:28.627686    7488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 01:13:29.126975    7488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 01:13:29.151239    7488 command_runner.go:130] > 1936
	I0409 01:13:29.151336    7488 api_server.go:72] duration metric: took 2.0366299s to wait for apiserver process to appear ...
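
	The five pgrep runs above are a simple poll: about every 500ms, check whether a kube-apiserver process matching the minikube config exists, then record the elapsed time (2.04s here). A minimal sketch, assuming pgrep is available where this runs:

		package main

		import (
			"fmt"
			"os/exec"
			"time"
		)

		// waitForAPIServerProcess polls pgrep until the kube-apiserver
		// process appears or the deadline passes, like the loop above.
		func waitForAPIServerProcess(timeout time.Duration) error {
			start := time.Now()
			for time.Since(start) < timeout {
				err := exec.Command("sudo", "pgrep", "-xnf",
					"kube-apiserver.*minikube.*").Run()
				if err == nil {
					fmt.Printf("apiserver up after %s\n", time.Since(start))
					return nil
				}
				time.Sleep(500 * time.Millisecond)
			}
			return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
		}

		func main() {
			if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
				fmt.Println(err)
			}
		}
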
	I0409 01:13:29.151336    7488 api_server.go:88] waiting for apiserver healthz status ...
	I0409 01:13:29.151438    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:13:34.152209    7488 api_server.go:269] stopped: https://192.168.120.172:8443/healthz: Get "https://192.168.120.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0409 01:13:34.152209    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:13:39.153458    7488 api_server.go:269] stopped: https://192.168.120.172:8443/healthz: Get "https://192.168.120.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0409 01:13:39.153458    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:13:44.153934    7488 api_server.go:269] stopped: https://192.168.120.172:8443/healthz: Get "https://192.168.120.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0409 01:13:44.153934    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:13:49.155160    7488 api_server.go:269] stopped: https://192.168.120.172:8443/healthz: Get "https://192.168.120.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0409 01:13:49.155160    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:13:50.194214    7488 api_server.go:269] stopped: https://192.168.120.172:8443/healthz: Get "https://192.168.120.172:8443/healthz": read tcp 192.168.112.1:55979->192.168.120.172:8443: wsarecv: An existing connection was forcibly closed by the remote host.
	I0409 01:13:50.194269    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:13:55.195174    7488 api_server.go:269] stopped: https://192.168.120.172:8443/healthz: Get "https://192.168.120.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0409 01:13:55.195174    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:14:00.196759    7488 api_server.go:269] stopped: https://192.168.120.172:8443/healthz: Get "https://192.168.120.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0409 01:14:00.196759    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:14:05.197876    7488 api_server.go:269] stopped: https://192.168.120.172:8443/healthz: Get "https://192.168.120.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0409 01:14:05.197876    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:14:09.090272    7488 api_server.go:279] https://192.168.120.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0409 01:14:09.090383    7488 api_server.go:103] status: https://192.168.120.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0409 01:14:09.090462    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:14:09.185554    7488 api_server.go:279] https://192.168.120.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0409 01:14:09.185554    7488 api_server.go:103] status: https://192.168.120.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0409 01:14:09.185636    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:14:09.207753    7488 api_server.go:279] https://192.168.120.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0409 01:14:09.208340    7488 api_server.go:103] status: https://192.168.120.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0409 01:14:09.652177    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:14:09.660224    7488 api_server.go:279] https://192.168.120.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0409 01:14:09.660467    7488 api_server.go:103] status: https://192.168.120.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0409 01:14:10.152952    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:14:10.159947    7488 api_server.go:279] https://192.168.120.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0409 01:14:10.159947    7488 api_server.go:103] status: https://192.168.120.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0409 01:14:10.653548    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:14:10.663178    7488 api_server.go:279] https://192.168.120.172:8443/healthz returned 200:
	ok
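
	The healthz polling above walks a typical restart ladder: client timeouts while the apiserver is still binding, then 403s because the probe is unauthenticated and anonymous users may not read /healthz yet, then verbose 500s in which the [-] poststarthooks (rbac/bootstrap-roles, the scheduler's bootstrap priority classes) are still completing, and finally a bare 200 "ok". Everything short of 200 is treated as "not healthy yet" and retried. A minimal poller in that spirit (certificate verification is skipped for brevity; a real client would pin the cluster CA):

		package main

		import (
			"crypto/tls"
			"fmt"
			"io"
			"net/http"
			"time"
		)

		func main() {
			client := &http.Client{
				Timeout: 5 * time.Second,
				// The apiserver serves a self-signed certificate; skipping
				// verification here is only for the sake of the sketch.
				Transport: &http.Transport{
					TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
				},
			}
			url := "https://192.168.120.172:8443/healthz"
			for deadline := time.Now().Add(4 * time.Minute); time.Now().Before(deadline); {
				resp, err := client.Get(url)
				if err != nil {
					time.Sleep(500 * time.Millisecond) // timeout / reset: retry
					continue
				}
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthy:", string(body)) // "ok"
					return
				}
				// 403 (anonymous) and 500 (failing poststarthooks) both mean
				// "keep waiting"; the 500 body names the failing hooks.
				time.Sleep(500 * time.Millisecond)
			}
			fmt.Println("apiserver never became healthy")
		}
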
	I0409 01:14:10.663392    7488 discovery_client.go:658] "Request Body" body=""
	I0409 01:14:10.663573    7488 round_trippers.go:470] GET https://192.168.120.172:8443/version
	I0409 01:14:10.663573    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:10.663573    7488 round_trippers.go:480]     Accept: application/json, */*
	I0409 01:14:10.663630    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:10.673416    7488 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0409 01:14:10.673472    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:10.673472    7488 round_trippers.go:587]     Content-Length: 263
	I0409 01:14:10.673472    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:10 GMT
	I0409 01:14:10.673521    7488 round_trippers.go:587]     Audit-Id: ba911d4c-d0f4-4ad7-a64c-f8dc032553cf
	I0409 01:14:10.673521    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:10.673521    7488 round_trippers.go:587]     Content-Type: application/json
	I0409 01:14:10.673521    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:10.673521    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:10.673521    7488 discovery_client.go:658] "Response Body" body=<
		{
		  "major": "1",
		  "minor": "32",
		  "gitVersion": "v1.32.2",
		  "gitCommit": "67a30c0adcf52bd3f56ff0893ce19966be12991f",
		  "gitTreeState": "clean",
		  "buildDate": "2025-02-12T21:19:47Z",
		  "goVersion": "go1.23.6",
		  "compiler": "gc",
		  "platform": "linux/amd64"
		}
	 >
	I0409 01:14:10.673521    7488 api_server.go:141] control plane version: v1.32.2
	I0409 01:14:10.673521    7488 api_server.go:131] duration metric: took 41.5216554s to wait for apiserver health ...
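
	With /healthz green, the client confirms the control-plane version through a plain GET /version, whose JSON body appears above. Decoding it takes only a small struct; this sketch reuses the VM address from this run and again skips TLS verification:

		package main

		import (
			"crypto/tls"
			"encoding/json"
			"fmt"
			"net/http"
		)

		// versionInfo matches the fields of the /version payload above.
		type versionInfo struct {
			Major      string `json:"major"`
			Minor      string `json:"minor"`
			GitVersion string `json:"gitVersion"`
			Platform   string `json:"platform"`
		}

		func main() {
			client := &http.Client{Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			}}
			resp, err := client.Get("https://192.168.120.172:8443/version")
			if err != nil {
				fmt.Println(err)
				return
			}
			defer resp.Body.Close()
			var v versionInfo
			if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
				fmt.Println(err)
				return
			}
			fmt.Println("control plane version:", v.GitVersion) // v1.32.2
		}
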
	I0409 01:14:10.673521    7488 cni.go:84] Creating CNI manager for ""
	I0409 01:14:10.673521    7488 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0409 01:14:10.676855    7488 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0409 01:14:10.691280    7488 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0409 01:14:10.698786    7488 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0409 01:14:10.698786    7488 command_runner.go:130] >   Size: 3103192   	Blocks: 6064       IO Block: 4096   regular file
	I0409 01:14:10.698786    7488 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0409 01:14:10.698786    7488 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0409 01:14:10.698976    7488 command_runner.go:130] > Access: 2025-04-09 01:12:01.071156800 +0000
	I0409 01:14:10.698976    7488 command_runner.go:130] > Modify: 2025-01-14 09:03:58.000000000 +0000
	I0409 01:14:10.698976    7488 command_runner.go:130] > Change: 2025-04-09 01:11:49.988000000 +0000
	I0409 01:14:10.698976    7488 command_runner.go:130] >  Birth: -
	I0409 01:14:10.699113    7488 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0409 01:14:10.699113    7488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0409 01:14:10.751766    7488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0409 01:14:11.530995    7488 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0409 01:14:11.531078    7488 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0409 01:14:11.531078    7488 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0409 01:14:11.531078    7488 command_runner.go:130] > daemonset.apps/kindnet configured
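
	Because three nodes were detected, minikube picks kindnet and applies its manifest with the cluster's own kubectl. kubectl apply is idempotent, which is why the RBAC objects report "unchanged" while the DaemonSet reports "configured". The equivalent invocation, sketched with os/exec and the in-VM paths from the log (it would have to run inside the VM to resolve them):

		package main

		import (
			"fmt"
			"os/exec"
		)

		func main() {
			// apply is declarative: objects already in the desired state
			// print "unchanged", updated ones print "configured", exactly
			// as in the four result lines above.
			out, err := exec.Command("/var/lib/minikube/binaries/v1.32.2/kubectl",
				"apply",
				"--kubeconfig=/var/lib/minikube/kubeconfig",
				"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
			fmt.Print(string(out))
			if err != nil {
				fmt.Println("apply failed:", err)
			}
		}
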
	I0409 01:14:11.531688    7488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0409 01:14:11.531791    7488 type.go:204] "Request Body" body=""
	I0409 01:14:11.531791    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods
	I0409 01:14:11.531791    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:11.531791    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:11.531791    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:11.539009    7488 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0409 01:14:11.539009    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:11.539009    7488 round_trippers.go:587]     Audit-Id: 80344bf7-1cc7-406c-a82c-34d5902f9085
	I0409 01:14:11.539009    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:11.539009    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:11.539009    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:11.539009    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:11.539009    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:11 GMT
	I0409 01:14:11.542208    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 9f e3 03 0a  0a 0a 00 12 04 31 38 32  |ist..........182|
		00000020  39 1a 00 12 d4 27 0a ae  19 0a 18 63 6f 72 65 64  |9....'.....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 64 35  |ns-668d6bf9bc-d5|
		00000040  34 73 34 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |4s4..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 31 32 34 33 31 66 32  |ystem".*$12431f2|
		00000070  37 2d 37 65 34 65 2d 34  31 63 39 2d 38 64 35 34  |7-7e4e-41c9-8d54|
		00000080  2d 62 63 37 62 65 32 30  37 34 62 39 63 32 03 34  |-bc7be2074b9c2.4|
		00000090  33 36 38 00 42 08 08 96  88 d7 bf 06 10 00 5a 13  |368.B.........Z.|
		000000a0  0a 07 6b 38 73 2d 61 70  70 12 08 6b 75 62 65 2d  |..k8s-app..kube-|
		000000b0  64 6e 73 5a 1f 0a 11 70  6f 64 2d 74 65 6d 70 6c  |dnsZ...pod-templ|
		000000c0  61 74 65 2d 68 61 73 68  12 0a 36 36 38 64 36 62  |ate-hash..668d6 [truncated 304542 chars]
	 >
	I0409 01:14:11.543163    7488 system_pods.go:59] 12 kube-system pods found
	I0409 01:14:11.543163    7488 system_pods.go:61] "coredns-668d6bf9bc-d54s4" [12431f27-7e4e-41c9-8d54-bc7be2074b9c] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "etcd-multinode-611500" [622d9aaa-1f2f-435c-8cea-b53badba27f4] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "kindnet-66fr6" [3127adff-6b68-4ae6-8fea-cbee940bb9df] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "kindnet-v66j5" [9200b124-3c4b-442b-99fd-49ccc2faf534] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "kindnet-vntlr" [2e088361-08c9-4325-8241-20f5f443dcf6] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "kube-apiserver-multinode-611500" [50196775-bc0c-41c1-b36c-193695d2db23] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "kube-controller-manager-multinode-611500" [75af0b90-6c72-4624-8660-aa943fec9606] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "kube-proxy-bhjnx" [afb6da99-de99-49c4-b080-8500b4b08d9b] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "kube-proxy-xnh8p" [ed8e944e-e73d-444c-b1ee-d7155c771c96] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "kube-proxy-zxxgf" [3506eee7-d946-4dde-91c9-9fc5c1474434] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "kube-scheduler-multinode-611500" [9185d5c0-b28a-438c-b05a-64667e4ac3d7] Running
	I0409 01:14:11.543163    7488 system_pods.go:61] "storage-provisioner" [8f7ea37f-c3a7-44fc-ac99-c184b674aca3] Running
	I0409 01:14:11.543163    7488 system_pods.go:74] duration metric: took 11.372ms to wait for pod list to return data ...
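
	The hex dumps above are protobuf-encoded responses; the client requested them via the Accept header application/vnd.kubernetes.protobuf. With client-go the same kube-system pod list is a single call and the wire format is handled transparently. A sketch, assuming the kubeconfig at that path is readable from where this runs:

		package main

		import (
			"context"
			"fmt"

			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)

		func main() {
			config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
			if err != nil {
				panic(err)
			}
			clientset, err := kubernetes.NewForConfig(config)
			if err != nil {
				panic(err)
			}
			pods, err := clientset.CoreV1().Pods("kube-system").
				List(context.Background(), metav1.ListOptions{})
			if err != nil {
				panic(err)
			}
			// The log found 12 kube-system pods at this point.
			for _, p := range pods.Items {
				fmt.Printf("%s %s\n", p.Name, p.Status.Phase)
			}
		}
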
	I0409 01:14:11.543163    7488 node_conditions.go:102] verifying NodePressure condition ...
	I0409 01:14:11.543163    7488 type.go:204] "Request Body" body=""
	I0409 01:14:11.543163    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes
	I0409 01:14:11.543163    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:11.543163    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:11.543163    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:11.547686    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:11.547686    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:11.547686    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:11 GMT
	I0409 01:14:11.547686    7488 round_trippers.go:587]     Audit-Id: 74695158-5ffa-4e89-8f7a-9977280a9f2e
	I0409 01:14:11.547686    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:11.547686    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:11.547686    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:11.547686    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:11.547686    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 86 5d 0a  0a 0a 00 12 04 31 38 32  |List..]......182|
		00000020  39 1a 00 12 e8 23 0a 8b  11 0a 10 6d 75 6c 74 69  |9....#.....multi|
		00000030  6e 6f 64 65 2d 36 31 31  35 30 30 12 00 1a 00 22  |node-611500...."|
		00000040  00 2a 24 62 31 32 35 32  66 34 61 2d 32 32 33 30  |.*$b1252f4a-2230|
		00000050  2d 34 36 61 36 2d 39 33  38 62 2d 37 63 30 37 31  |-46a6-938b-7c071|
		00000060  31 31 33 33 34 32 34 32  04 31 36 33 31 38 00 42  |11334242.16318.B|
		00000070  08 08 8d 88 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000080  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000090  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		000000a0  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000b0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/ar [truncated 57974 chars]
	 >
	I0409 01:14:11.548672    7488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0409 01:14:11.548672    7488 node_conditions.go:123] node cpu capacity is 2
	I0409 01:14:11.548672    7488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0409 01:14:11.548672    7488 node_conditions.go:123] node cpu capacity is 2
	I0409 01:14:11.548672    7488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0409 01:14:11.548672    7488 node_conditions.go:123] node cpu capacity is 2
	I0409 01:14:11.548672    7488 node_conditions.go:105] duration metric: took 5.5092ms to run NodePressure ...
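
	The NodePressure pass reads each Node object's capacity; here all three nodes report 2 CPUs and 17734596Ki of ephemeral storage. Reading the same fields with client-go (same kubeconfig assumption as above):

		package main

		import (
			"context"
			"fmt"

			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)

		func main() {
			config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
			if err != nil {
				panic(err)
			}
			clientset, err := kubernetes.NewForConfig(config)
			if err != nil {
				panic(err)
			}
			nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
			if err != nil {
				panic(err)
			}
			for _, n := range nodes.Items {
				cpu := n.Status.Capacity[corev1.ResourceCPU]
				eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
				// Matches the log: cpu capacity 2, ephemeral 17734596Ki per node.
				fmt.Printf("%s cpu=%s ephemeral=%s\n", n.Name, cpu.String(), eph.String())
			}
		}
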
	I0409 01:14:11.548672    7488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0409 01:14:11.863835    7488 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0409 01:14:11.863962    7488 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0409 01:14:11.864027    7488 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0409 01:14:11.864140    7488 type.go:204] "Request Body" body=""
	I0409 01:14:11.864140    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0409 01:14:11.864140    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:11.864140    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:11.864140    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:11.868765    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:11.868823    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:11.868875    7488 round_trippers.go:587]     Audit-Id: 2c1eff26-f41b-4508-b7ee-0c1bf6b30f0c
	I0409 01:14:11.868912    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:11.868912    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:11.868912    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:11.868912    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:11.868912    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:11 GMT
	I0409 01:14:11.869624    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 dc b3 01 0a  0a 0a 00 12 04 31 38 33  |ist..........183|
		00000020  31 1a 00 12 b6 2b 0a a0  1a 0a 15 65 74 63 64 2d  |1....+.....etcd-|
		00000030  6d 75 6c 74 69 6e 6f 64  65 2d 36 31 31 35 30 30  |multinode-611500|
		00000040  12 00 1a 0b 6b 75 62 65  2d 73 79 73 74 65 6d 22  |....kube-system"|
		00000050  00 2a 24 36 32 32 64 39  61 61 61 2d 31 66 32 66  |.*$622d9aaa-1f2f|
		00000060  2d 34 33 35 63 2d 38 63  65 61 2d 62 35 33 62 61  |-435c-8cea-b53ba|
		00000070  64 62 61 32 37 66 34 32  03 33 39 35 38 00 42 08  |dba27f42.3958.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 11 0a 09 63 6f 6d 70  |........Z...comp|
		00000090  6f 6e 65 6e 74 12 04 65  74 63 64 5a 15 0a 04 74  |onent..etcdZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 50 0a 30 6b 75  62 65 61 64 6d 2e 6b 75  |nebP.0kubeadm.ku|
		000000c0  62 65 72 6e 65 74 65 73  2e 69 6f 2f 65 74 63 64  |bernetes.io/etc [truncated 112727 chars]
	 >
	I0409 01:14:11.870402    7488 retry.go:31] will retry after 263.697513ms: kubelet not initialised
	I0409 01:14:12.135287    7488 type.go:204] "Request Body" body=""
	I0409 01:14:12.135287    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0409 01:14:12.135287    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:12.135287    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:12.135287    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:12.139238    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:12.139238    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:12.139298    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:12.139298    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:12.139298    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:12.139298    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:12.139298    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:12 GMT
	I0409 01:14:12.139298    7488 round_trippers.go:587]     Audit-Id: a44e476b-8e12-4bec-83da-aa8cf1a76fd8
	I0409 01:14:12.140586    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 dc b3 01 0a  0a 0a 00 12 04 31 38 33  |ist..........183|
		00000020  31 1a 00 12 b6 2b 0a a0  1a 0a 15 65 74 63 64 2d  |1....+.....etcd-|
		00000030  6d 75 6c 74 69 6e 6f 64  65 2d 36 31 31 35 30 30  |multinode-611500|
		00000040  12 00 1a 0b 6b 75 62 65  2d 73 79 73 74 65 6d 22  |....kube-system"|
		00000050  00 2a 24 36 32 32 64 39  61 61 61 2d 31 66 32 66  |.*$622d9aaa-1f2f|
		00000060  2d 34 33 35 63 2d 38 63  65 61 2d 62 35 33 62 61  |-435c-8cea-b53ba|
		00000070  64 62 61 32 37 66 34 32  03 33 39 35 38 00 42 08  |dba27f42.3958.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 11 0a 09 63 6f 6d 70  |........Z...comp|
		00000090  6f 6e 65 6e 74 12 04 65  74 63 64 5a 15 0a 04 74  |onent..etcdZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 50 0a 30 6b 75  62 65 61 64 6d 2e 6b 75  |nebP.0kubeadm.ku|
		000000c0  62 65 72 6e 65 74 65 73  2e 69 6f 2f 65 74 63 64  |bernetes.io/etc [truncated 112727 chars]
	 >
	I0409 01:14:12.141163    7488 retry.go:31] will retry after 343.106119ms: kubelet not initialised
	I0409 01:14:12.484664    7488 type.go:204] "Request Body" body=""
	I0409 01:14:12.484664    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0409 01:14:12.484664    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:12.484664    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:12.484664    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:12.490019    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:12.490114    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:12.490114    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:12.490114    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:12.490114    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:12 GMT
	I0409 01:14:12.490114    7488 round_trippers.go:587]     Audit-Id: 8670cb0f-e38d-4cfe-8caf-481332afbb66
	I0409 01:14:12.490114    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:12.490114    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:12.491469    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 dc b3 01 0a  0a 0a 00 12 04 31 38 33  |ist..........183|
		00000020  31 1a 00 12 b6 2b 0a a0  1a 0a 15 65 74 63 64 2d  |1....+.....etcd-|
		00000030  6d 75 6c 74 69 6e 6f 64  65 2d 36 31 31 35 30 30  |multinode-611500|
		00000040  12 00 1a 0b 6b 75 62 65  2d 73 79 73 74 65 6d 22  |....kube-system"|
		00000050  00 2a 24 36 32 32 64 39  61 61 61 2d 31 66 32 66  |.*$622d9aaa-1f2f|
		00000060  2d 34 33 35 63 2d 38 63  65 61 2d 62 35 33 62 61  |-435c-8cea-b53ba|
		00000070  64 62 61 32 37 66 34 32  03 33 39 35 38 00 42 08  |dba27f42.3958.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 11 0a 09 63 6f 6d 70  |........Z...comp|
		00000090  6f 6e 65 6e 74 12 04 65  74 63 64 5a 15 0a 04 74  |onent..etcdZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 50 0a 30 6b 75  62 65 61 64 6d 2e 6b 75  |nebP.0kubeadm.ku|
		000000c0  62 65 72 6e 65 74 65 73  2e 69 6f 2f 65 74 63 64  |bernetes.io/etc [truncated 112727 chars]
	 >
	I0409 01:14:12.491896    7488 retry.go:31] will retry after 840.109319ms: kubelet not initialised
	I0409 01:14:13.332253    7488 type.go:204] "Request Body" body=""
	I0409 01:14:13.332253    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0409 01:14:13.332253    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:13.332253    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:13.332253    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:13.336668    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:13.336668    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:13.336668    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:13 GMT
	I0409 01:14:13.336668    7488 round_trippers.go:587]     Audit-Id: f5b3bf4b-a59e-4f77-9605-2bcb2dca0741
	I0409 01:14:13.337648    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:13.337648    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:13.337648    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:13.337648    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:13.338765    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 dc b3 01 0a  0a 0a 00 12 04 31 38 33  |ist..........183|
		00000020  31 1a 00 12 b6 2b 0a a0  1a 0a 15 65 74 63 64 2d  |1....+.....etcd-|
		00000030  6d 75 6c 74 69 6e 6f 64  65 2d 36 31 31 35 30 30  |multinode-611500|
		00000040  12 00 1a 0b 6b 75 62 65  2d 73 79 73 74 65 6d 22  |....kube-system"|
		00000050  00 2a 24 36 32 32 64 39  61 61 61 2d 31 66 32 66  |.*$622d9aaa-1f2f|
		00000060  2d 34 33 35 63 2d 38 63  65 61 2d 62 35 33 62 61  |-435c-8cea-b53ba|
		00000070  64 62 61 32 37 66 34 32  03 33 39 35 38 00 42 08  |dba27f42.3958.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 11 0a 09 63 6f 6d 70  |........Z...comp|
		00000090  6f 6e 65 6e 74 12 04 65  74 63 64 5a 15 0a 04 74  |onent..etcdZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 50 0a 30 6b 75  62 65 61 64 6d 2e 6b 75  |nebP.0kubeadm.ku|
		000000c0  62 65 72 6e 65 74 65 73  2e 69 6f 2f 65 74 63 64  |bernetes.io/etc [truncated 112727 chars]
	 >
	I0409 01:14:13.338815    7488 retry.go:31] will retry after 1.042076456s: kubelet not initialised
	I0409 01:14:14.381819    7488 type.go:204] "Request Body" body=""
	I0409 01:14:14.381819    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0409 01:14:14.381819    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.381819    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.381819    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.393247    7488 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0409 01:14:14.393247    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.393247    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.393247    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.393247    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.393247    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.393247    7488 round_trippers.go:587]     Audit-Id: b1f30868-724c-41b1-961f-e4d5661b2d66
	I0409 01:14:14.393247    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.394204    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 e3 a2 01 0a  0a 0a 00 12 04 31 38 35  |ist..........185|
		00000020  36 1a 00 12 dc 20 0a d5  13 0a 15 65 74 63 64 2d  |6.... .....etcd-|
		00000030  6d 75 6c 74 69 6e 6f 64  65 2d 36 31 31 35 30 30  |multinode-611500|
		00000040  12 00 1a 0b 6b 75 62 65  2d 73 79 73 74 65 6d 22  |....kube-system"|
		00000050  00 2a 24 65 36 62 33 39  62 31 61 2d 61 36 64 35  |.*$e6b39b1a-a6d5|
		00000060  2d 34 36 64 31 2d 61 35  36 61 2d 32 34 33 63 39  |-46d1-a56a-243c9|
		00000070  62 62 36 66 35 36 33 32  04 31 38 34 35 38 00 42  |bb6f5632.18458.B|
		00000080  08 08 e6 93 d7 bf 06 10  00 5a 11 0a 09 63 6f 6d  |.........Z...com|
		00000090  70 6f 6e 65 6e 74 12 04  65 74 63 64 5a 15 0a 04  |ponent..etcdZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 50 0a 30 6b  75 62 65 61 64 6d 2e 6b  |anebP.0kubeadm.k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 65 74 63  |ubernetes.io/et [truncated 101990 chars]
	 >
	I0409 01:14:14.394870    7488 kubeadm.go:739] kubelet initialised
	I0409 01:14:14.394870    7488 kubeadm.go:740] duration metric: took 2.5307302s waiting for restarted kubelet to initialise ...
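
	The retry lines above show the waits growing (264ms, 343ms, 840ms, 1.04s) while the client polls pods with labelSelector tier=control-plane until the restarted kubelet has recreated the static pods; the etcd pod's new UID in the final response is the telltale. A hedged sketch of such a growing-backoff poll; the ">= 4 pods" success test below is a simplification for illustration, not minikube's actual criterion:

		package main

		import (
			"context"
			"fmt"
			"time"

			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)

		func main() {
			config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
			if err != nil {
				panic(err)
			}
			clientset, err := kubernetes.NewForConfig(config)
			if err != nil {
				panic(err)
			}
			backoff := 250 * time.Millisecond
			for i := 0; i < 10; i++ {
				pods, err := clientset.CoreV1().Pods("kube-system").List(
					context.Background(),
					metav1.ListOptions{LabelSelector: "tier=control-plane"})
				if err == nil && len(pods.Items) >= 4 { // apiserver, cm, scheduler, etcd
					fmt.Println("control-plane pods back:", len(pods.Items))
					return
				}
				time.Sleep(backoff)
				backoff = backoff * 3 / 2 // grow the wait, as the retry log does
			}
			fmt.Println("control-plane pods did not reappear")
		}
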
	I0409 01:14:14.394935    7488 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0409 01:14:14.395030    7488 type.go:204] "Request Body" body=""
	I0409 01:14:14.395056    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods
	I0409 01:14:14.395056    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.395056    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.395056    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.406740    7488 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0409 01:14:14.406740    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.406740    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.406740    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.406740    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.406740    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.406740    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.406740    7488 round_trippers.go:587]     Audit-Id: 80dfbf8a-d183-457a-bdcc-c4736198db4c
	I0409 01:14:14.408635    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 c4 d4 03 0a  0a 0a 00 12 04 31 38 35  |ist..........185|
		00000020  37 1a 00 12 d4 27 0a ae  19 0a 18 63 6f 72 65 64  |7....'.....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 64 35  |ns-668d6bf9bc-d5|
		00000040  34 73 34 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |4s4..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 31 32 34 33 31 66 32  |ystem".*$12431f2|
		00000070  37 2d 37 65 34 65 2d 34  31 63 39 2d 38 64 35 34  |7-7e4e-41c9-8d54|
		00000080  2d 62 63 37 62 65 32 30  37 34 62 39 63 32 03 34  |-bc7be2074b9c2.4|
		00000090  33 36 38 00 42 08 08 96  88 d7 bf 06 10 00 5a 13  |368.B.........Z.|
		000000a0  0a 07 6b 38 73 2d 61 70  70 12 08 6b 75 62 65 2d  |..k8s-app..kube-|
		000000b0  64 6e 73 5a 1f 0a 11 70  6f 64 2d 74 65 6d 70 6c  |dnsZ...pod-templ|
		000000c0  61 74 65 2d 68 61 73 68  12 0a 36 36 38 64 36 62  |ate-hash..668d6 [truncated 295225 chars]
	 >
	I0409 01:14:14.409631    7488 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:14.409631    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.409631    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 01:14:14.409631    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.409631    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.409631    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.415014    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:14.415109    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.415109    7488 round_trippers.go:587]     Audit-Id: e44d81c6-f5a6-40d1-9812-2172e63ebd4e
	I0409 01:14:14.415109    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.415109    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.415109    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.415109    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.415109    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.415789    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d4 27 0a ae 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.'.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 64 35 34 73 34 12  |68d6bf9bc-d54s4.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 32 34  33 31 66 32 37 2d 37 65  |m".*$12431f27-7e|
		00000060  34 65 2d 34 31 63 39 2d  38 64 35 34 2d 62 63 37  |4e-41c9-8d54-bc7|
		00000070  62 65 32 30 37 34 62 39  63 32 03 34 33 36 38 00  |be2074b9c2.4368.|
		00000080  42 08 08 96 88 d7 bf 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 24171 chars]
	 >
	I0409 01:14:14.416164    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.416223    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:14.416288    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.416308    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.416308    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.426390    7488 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0409 01:14:14.426526    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.426526    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.426526    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.426526    7488 round_trippers.go:587]     Audit-Id: f38b70b1-d268-4582-8b21-9ab6d1c8b264
	I0409 01:14:14.426585    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.426585    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.426585    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.426585    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:14.426585    7488 pod_ready.go:98] node "multinode-611500" hosting pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:14.427143    7488 pod_ready.go:82] duration metric: took 17.5117ms for pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace to be "Ready" ...
	E0409 01:14:14.427221    7488 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-611500" hosting pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:14.427221    7488 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:14.427221    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.427319    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-611500
	I0409 01:14:14.427319    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.427385    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.427385    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.438634    7488 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0409 01:14:14.438634    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.438725    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.438725    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.438725    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.438725    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.438725    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.438725    7488 round_trippers.go:587]     Audit-Id: 9ea8e671-aedc-4698-a292-d36065631723
	I0409 01:14:14.440001    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  dc 20 0a d5 13 0a 15 65  74 63 64 2d 6d 75 6c 74  |. .....etcd-mult|
		00000020  69 6e 6f 64 65 2d 36 31  31 35 30 30 12 00 1a 0b  |inode-611500....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 65  |kube-system".*$e|
		00000040  36 62 33 39 62 31 61 2d  61 36 64 35 2d 34 36 64  |6b39b1a-a6d5-46d|
		00000050  31 2d 61 35 36 61 2d 32  34 33 63 39 62 62 36 66  |1-a56a-243c9bb6f|
		00000060  35 36 33 32 04 31 38 34  35 38 00 42 08 08 e6 93  |5632.18458.B....|
		00000070  d7 bf 06 10 00 5a 11 0a  09 63 6f 6d 70 6f 6e 65  |.....Z...compone|
		00000080  6e 74 12 04 65 74 63 64  5a 15 0a 04 74 69 65 72  |nt..etcdZ...tier|
		00000090  12 0d 63 6f 6e 74 72 6f  6c 2d 70 6c 61 6e 65 62  |..control-planeb|
		000000a0  50 0a 30 6b 75 62 65 61  64 6d 2e 6b 75 62 65 72  |P.0kubeadm.kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 65 74 63 64 2e 61 64  |netes.io/etcd.ad|
		000000c0  76 65 72 74 69 73 65 2d  63 6c 69 65 6e 74 2d 75  |vertise-client- [truncated 19818 chars]
	 >
	I0409 01:14:14.440245    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.440331    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:14.440355    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.440355    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.440355    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.445264    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:14.445264    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.446267    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.446267    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.446267    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.446267    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.446267    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.446267    7488 round_trippers.go:587]     Audit-Id: 565ebd19-beaa-4aa4-acf5-1695abbe0ff6
	I0409 01:14:14.447266    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:14.447266    7488 pod_ready.go:98] node "multinode-611500" hosting pod "etcd-multinode-611500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:14.447266    7488 pod_ready.go:82] duration metric: took 20.045ms for pod "etcd-multinode-611500" in "kube-system" namespace to be "Ready" ...
	E0409 01:14:14.447266    7488 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-611500" hosting pod "etcd-multinode-611500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:14.447266    7488 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:14.448291    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.448291    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-611500
	I0409 01:14:14.448291    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.448291    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.448291    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.457260    7488 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0409 01:14:14.457260    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.457260    7488 round_trippers.go:587]     Audit-Id: 8591cffe-4d7e-4de9-990e-1a48388c13b4
	I0409 01:14:14.457260    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.457260    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.457260    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.457260    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.457260    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.458262    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a2 29 0a e5 15 0a 1f 6b  75 62 65 2d 61 70 69 73  |.).....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  36 31 31 35 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |611500....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 66 39 39 32 34 37 35  |ystem".*$f992475|
		00000050  34 2d 66 38 63 35 2d 34  61 38 62 2d 39 64 61 32  |4-f8c5-4a8b-9da2|
		00000060  2d 32 33 64 38 30 39 36  61 35 65 63 66 32 04 31  |-23d8096a5ecf2.1|
		00000070  38 34 33 38 00 42 08 08  e6 93 d7 bf 06 10 00 5a  |8438.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 61 70 69 73 65  72 76 65 72 5a 15 0a 04  |be-apiserverZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 57 0a 3f 6b  75 62 65 61 64 6d 2e 6b  |anebW.?kubeadm.k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 6b 75 62  |ubernetes.io/ku [truncated 25196 chars]
	 >
	I0409 01:14:14.458262    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.458262    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:14.458262    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.458262    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.458262    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.476279    7488 round_trippers.go:581] Response Status: 200 OK in 18 milliseconds
	I0409 01:14:14.476467    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.476467    7488 round_trippers.go:587]     Audit-Id: 9fcee71f-bea5-4797-a073-be0260c50827
	I0409 01:14:14.476467    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.476467    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.476467    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.476467    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.476467    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.478841    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:14.478963    7488 pod_ready.go:98] node "multinode-611500" hosting pod "kube-apiserver-multinode-611500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:14.479116    7488 pod_ready.go:82] duration metric: took 31.8497ms for pod "kube-apiserver-multinode-611500" in "kube-system" namespace to be "Ready" ...
	E0409 01:14:14.479116    7488 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-611500" hosting pod "kube-apiserver-multinode-611500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:14.479116    7488 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:14.479116    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.479116    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:14.479116    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.479116    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.479348    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.483291    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:14.483705    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.483705    7488 round_trippers.go:587]     Audit-Id: 48cb31cc-eee3-42d8-92a6-2b4f229e1d67
	I0409 01:14:14.483705    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.483705    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.483705    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.483705    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.483705    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.484147    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  de 34 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.4....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 38 35 36 38 00 42 08  |ec96062.18568.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 32460 chars]
	 >
	I0409 01:14:14.484421    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.484482    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:14.484482    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.484482    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.484482    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.507628    7488 round_trippers.go:581] Response Status: 200 OK in 22 milliseconds
	I0409 01:14:14.507628    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.507705    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.507705    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.507705    7488 round_trippers.go:587]     Audit-Id: bac051f1-d33a-4c5f-9474-3a8604a40910
	I0409 01:14:14.507705    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.507705    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.507705    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.508324    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:14.508532    7488 pod_ready.go:98] node "multinode-611500" hosting pod "kube-controller-manager-multinode-611500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:14.508569    7488 pod_ready.go:82] duration metric: took 29.4531ms for pod "kube-controller-manager-multinode-611500" in "kube-system" namespace to be "Ready" ...
	E0409 01:14:14.508569    7488 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-611500" hosting pod "kube-controller-manager-multinode-611500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:14.508569    7488 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bhjnx" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:14.508695    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.582287    7488 request.go:661] Waited for 73.5912ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bhjnx
	I0409 01:14:14.582287    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bhjnx
	I0409 01:14:14.582287    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.582287    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.582287    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.585309    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:14.586083    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.586083    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.586083    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.586083    7488 round_trippers.go:587]     Audit-Id: c3d6e252-6b44-47bd-b0a6-53c87161488b
	I0409 01:14:14.586083    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.586168    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.586168    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.589099    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  af 25 0a c1 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 62 68 6a 6e 78 12  0b 6b 75 62 65 2d 70 72  |y-bhjnx..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 61 66 62  36 64 61 39 39 2d 64 65  |m".*$afb6da99-de|
		00000050  39 39 2d 34 39 63 34 2d  62 30 38 30 2d 38 35 30  |99-49c4-b080-850|
		00000060  30 62 34 62 30 38 64 39  62 32 03 36 32 35 38 00  |0b4b08d9b2.6258.|
		00000070  42 08 08 d1 89 d7 bf 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  37 62 62 38 34 63 34 39  |n-hash..7bb84c49|
		000000a0  38 34 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |84Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22744 chars]
	 >
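The "Waited for … due to client-side throttling, not priority and fairness" lines come from client-go's own token-bucket rate limiter, not from the API server: a default rest.Config allows roughly 5 requests/s with a burst of 10, and the back-to-back pod and node GETs here exceed that, so each request sleeps briefly before being sent. If that latency mattered, the limiter can be relaxed when the client is built; a minimal sketch (the QPS/Burst defaults are quoted from memory and worth verifying against the client-go release in use):

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset with a larger client-side token bucket,
// so bursts of status GETs like the ones above are not delayed.
func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // default is ~5 requests/s
	cfg.Burst = 100 // default burst is ~10
	return kubernetes.NewForConfig(cfg)
}
```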
	I0409 01:14:14.589473    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.782118    7488 request.go:661] Waited for 192.6432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/nodes/multinode-611500-m02
	I0409 01:14:14.782118    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500-m02
	I0409 01:14:14.782118    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.782118    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.782118    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.795223    7488 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0409 01:14:14.795223    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.795308    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.795308    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.795308    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.795308    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.795308    7488 round_trippers.go:587]     Content-Length: 3466
	I0409 01:14:14.795308    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:14 GMT
	I0409 01:14:14.795308    7488 round_trippers.go:587]     Audit-Id: 39570308-c6d2-434b-ac7b-6ed1988bcc3b
	I0409 01:14:14.796124    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f3 1a 0a b0 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 04 31 37 37 34 38 00  |bd39faf32.17748.|
		00000060  42 08 08 d1 89 d7 bf 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16113 chars]
	 >
	I0409 01:14:14.796124    7488 pod_ready.go:93] pod "kube-proxy-bhjnx" in "kube-system" namespace has status "Ready":"True"
	I0409 01:14:14.796124    7488 pod_ready.go:82] duration metric: took 287.551ms for pod "kube-proxy-bhjnx" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:14.796124    7488 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xnh8p" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:14.796124    7488 type.go:168] "Request Body" body=""
	I0409 01:14:14.983178    7488 request.go:661] Waited for 187.0516ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnh8p
	I0409 01:14:14.983178    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnh8p
	I0409 01:14:14.983178    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:14.983178    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:14.983178    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:14.988143    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:14.988289    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:14.988289    7488 round_trippers.go:587]     Audit-Id: 3341b43c-fefd-4d3f-9f64-df43e1c356b9
	I0409 01:14:14.988289    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:14.988289    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:14.988289    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:14.988289    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:14.988385    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:15 GMT
	I0409 01:14:14.988822    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b4 26 0a c5 15 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 78 6e 68 38 70 12  0b 6b 75 62 65 2d 70 72  |y-xnh8p..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 65 64 38  65 39 34 34 65 2d 65 37  |m".*$ed8e944e-e7|
		00000050  33 64 2d 34 34 34 63 2d  62 31 65 65 2d 64 37 31  |3d-444c-b1ee-d71|
		00000060  35 35 63 37 37 31 63 39  36 32 04 31 38 31 31 38  |55c771c962.18118|
		00000070  00 42 08 08 f5 8b d7 bf  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 37 62 62 38 34 63 34  |on-hash..7bb84c4|
		000000a0  39 38 34 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |984Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23381 chars]
	 >
	I0409 01:14:14.988822    7488 type.go:168] "Request Body" body=""
	I0409 01:14:15.182398    7488 request.go:661] Waited for 193.5734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/nodes/multinode-611500-m03
	I0409 01:14:15.182398    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500-m03
	I0409 01:14:15.182398    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:15.182398    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:15.182398    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:15.188224    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:15.188224    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:15.188224    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:15.188224    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:15.188224    7488 round_trippers.go:587]     Content-Length: 3885
	I0409 01:14:15.188224    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:15 GMT
	I0409 01:14:15.188224    7488 round_trippers.go:587]     Audit-Id: a0529b4d-6e8f-4763-952a-9e4e34eed07a
	I0409 01:14:15.188224    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:15.188224    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:15.188785    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 96 1e 0a eb 12 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 33 12 00 1a 00  |e-611500-m03....|
		00000030  22 00 2a 24 38 63 66 33  37 34 64 36 2d 31 66 62  |".*$8cf374d6-1fb|
		00000040  30 2d 34 30 36 38 2d 39  62 66 39 2d 30 62 32 37  |0-4068-9bf9-0b27|
		00000050  61 34 32 61 63 66 34 39  32 04 31 38 31 38 38 00  |a42acf492.18188.|
		00000060  42 08 08 a0 91 d7 bf 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 18170 chars]
	 >
	I0409 01:14:15.188915    7488 pod_ready.go:98] node "multinode-611500-m03" hosting pod "kube-proxy-xnh8p" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500-m03" has status "Ready":"Unknown"
	I0409 01:14:15.189004    7488 pod_ready.go:82] duration metric: took 392.8751ms for pod "kube-proxy-xnh8p" in "kube-system" namespace to be "Ready" ...
	E0409 01:14:15.189091    7488 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-611500-m03" hosting pod "kube-proxy-xnh8p" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500-m03" has status "Ready":"Unknown"
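Note the two distinct failure states in these skips: multinode-611500 reports Ready:"False" (its kubelet is posting status, but the node has not passed readiness checks since the restart), while multinode-611500-m03 reports Ready:"Unknown", the status the node controller assigns once it stops receiving kubelet heartbeats altogether. Either way the hosted pod cannot become Ready, so both cases are skipped identically.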
	I0409 01:14:15.189091    7488 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zxxgf" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:15.189158    7488 type.go:168] "Request Body" body=""
	I0409 01:14:15.382832    7488 request.go:661] Waited for 193.6059ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxxgf
	I0409 01:14:15.382832    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxxgf
	I0409 01:14:15.382832    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:15.382832    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:15.382832    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:15.389387    7488 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 01:14:15.389502    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:15.389502    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:15.389502    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:15.389502    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:15 GMT
	I0409 01:14:15.389502    7488 round_trippers.go:587]     Audit-Id: 24ac58f6-7edd-4ff2-98f8-4d7325262b04
	I0409 01:14:15.389502    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:15.389502    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:15.389954    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  c3 27 0a fc 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.'.....kube-prox|
		00000020  79 2d 7a 78 78 67 66 12  0b 6b 75 62 65 2d 70 72  |y-zxxgf..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 33 35 30  36 65 65 65 37 2d 64 39  |m".*$3506eee7-d9|
		00000050  34 36 2d 34 64 64 65 2d  39 31 63 39 2d 39 66 63  |46-4dde-91c9-9fc|
		00000060  35 63 31 34 37 34 34 33  34 32 04 31 38 36 32 38  |5c14744342.18628|
		00000070  00 42 08 08 96 88 d7 bf  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 37 62 62 38 34 63 34  |on-hash..7bb84c4|
		000000a0  39 38 34 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |984Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 24091 chars]
	 >
	I0409 01:14:15.390360    7488 type.go:168] "Request Body" body=""
	I0409 01:14:15.582580    7488 request.go:661] Waited for 192.2183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:15.582580    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:15.582580    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:15.582580    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:15.582580    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:15.587541    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:15.587607    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:15.587607    7488 round_trippers.go:587]     Audit-Id: 601845b6-2c1c-426f-a464-38e705f48b9f
	I0409 01:14:15.587607    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:15.587607    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:15.587607    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:15.587607    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:15.587607    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:15 GMT
	I0409 01:14:15.588141    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:15.588409    7488 pod_ready.go:98] node "multinode-611500" hosting pod "kube-proxy-zxxgf" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:15.588409    7488 pod_ready.go:82] duration metric: took 399.3121ms for pod "kube-proxy-zxxgf" in "kube-system" namespace to be "Ready" ...
	E0409 01:14:15.588409    7488 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-611500" hosting pod "kube-proxy-zxxgf" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:15.588409    7488 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:15.588409    7488 type.go:168] "Request Body" body=""
	I0409 01:14:15.782632    7488 request.go:661] Waited for 194.2212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-611500
	I0409 01:14:15.782632    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-611500
	I0409 01:14:15.782632    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:15.782632    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:15.782632    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:15.788010    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:15.788081    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:15.788081    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:15.788081    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:15 GMT
	I0409 01:14:15.788139    7488 round_trippers.go:587]     Audit-Id: ed3e344e-fca2-47fd-8af9-9a3685b601cf
	I0409 01:14:15.788139    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:15.788139    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:15.788139    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:15.788139    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  ef 23 0a 84 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.#.....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  36 31 31 35 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |611500....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 39 31 38 35 64 35 63  |ystem".*$9185d5c|
		00000050  30 2d 62 32 38 61 2d 34  33 38 63 2d 62 30 35 61  |0-b28a-438c-b05a|
		00000060  2d 36 34 36 36 37 65 34  61 63 33 64 37 32 04 31  |-64667e4ac3d72.1|
		00000070  38 35 33 38 00 42 08 08  90 88 d7 bf 06 10 00 5a  |8538.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 73 63 68 65 64  75 6c 65 72 5a 15 0a 04  |be-schedulerZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 3d 0a 19 6b  75 62 65 72 6e 65 74 65  |aneb=..kubernete|
		000000c0  73 2e 69 6f 2f 63 6f 6e  66 69 67 2e 68 61 73 68  |s.io/config.has [truncated 21796 chars]
	 >
	I0409 01:14:15.788798    7488 type.go:168] "Request Body" body=""
	I0409 01:14:15.982404    7488 request.go:661] Waited for 193.6042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:15.982404    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:15.982404    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:15.982404    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:15.982404    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:15.990714    7488 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0409 01:14:15.990794    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:15.990794    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:15.990794    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:15.990794    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:15.990794    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:16 GMT
	I0409 01:14:15.990794    7488 round_trippers.go:587]     Audit-Id: 20f9833e-db1a-41fb-aad3-d8cb4f7eb03a
	I0409 01:14:15.990794    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:15.991451    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:15.991599    7488 pod_ready.go:98] node "multinode-611500" hosting pod "kube-scheduler-multinode-611500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:15.991599    7488 pod_ready.go:82] duration metric: took 403.1848ms for pod "kube-scheduler-multinode-611500" in "kube-system" namespace to be "Ready" ...
	E0409 01:14:15.991599    7488 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-611500" hosting pod "kube-scheduler-multinode-611500" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500" has status "Ready":"False"
	I0409 01:14:15.991599    7488 pod_ready.go:39] duration metric: took 1.5966436s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0409 01:14:15.991599    7488 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0409 01:14:16.010734    7488 command_runner.go:130] > -16
	I0409 01:14:16.011197    7488 ops.go:34] apiserver oom_adj: -16
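Reading -16 from the legacy /proc/<pid>/oom_adj file is the expected value for a protected control-plane process: the kubelet pins node-critical static pods such as kube-apiserver at oom_score_adj = -997, and the kernel derives the legacy oom_adj reading by scaling that by 17/1000. The test only needs a strongly negative number here to conclude the apiserver will survive OOM pressure. A worked check of the scaling (the -997 constant is the kubelet's value for guaranteed/critical pods; treat it as an assumption to verify against the kubelet release in use):

```go
package main

import "fmt"

// legacyOOMAdj mirrors the kernel's mapping from the modern
// oom_score_adj scale (-1000..1000) down to the legacy oom_adj
// scale (-17..15). Go's integer division truncates toward zero,
// matching the kernel's arithmetic here.
func legacyOOMAdj(scoreAdj int) int {
	return scoreAdj * 17 / 1000
}

func main() {
	// kubelet pins critical static pods at -997 (assumed constant),
	// which reads back through the legacy file as -16, as logged above.
	fmt.Println(legacyOOMAdj(-997)) // -16
}
```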
	I0409 01:14:16.011197    7488 kubeadm.go:597] duration metric: took 51.0097751s to restartPrimaryControlPlane
	I0409 01:14:16.011197    7488 kubeadm.go:394] duration metric: took 51.0777842s to StartCluster
	I0409 01:14:16.011197    7488 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 01:14:16.011197    7488 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0409 01:14:16.013192    7488 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0409 01:14:16.014170    7488 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.120.172 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0409 01:14:16.014170    7488 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0409 01:14:16.015192    7488 config.go:182] Loaded profile config "multinode-611500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 01:14:16.022178    7488 out.go:177] * Verifying Kubernetes components...
	I0409 01:14:16.027500    7488 out.go:177] * Enabled addons: 
	I0409 01:14:16.036248    7488 addons.go:514] duration metric: took 22.0781ms for enable addons: enabled=[]
	I0409 01:14:16.047271    7488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0409 01:14:16.348939    7488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0409 01:14:16.375040    7488 node_ready.go:35] waiting up to 6m0s for node "multinode-611500" to be "Ready" ...
	I0409 01:14:16.375343    7488 type.go:168] "Request Body" body=""
	I0409 01:14:16.375483    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:16.375483    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:16.375483    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:16.375483    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:16.380130    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:16.380130    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:16.380130    7488 round_trippers.go:587]     Audit-Id: fc6ce43f-720a-4f39-bc1d-e97aadb432cc
	I0409 01:14:16.380130    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:16.380130    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:16.380130    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:16.380130    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:16.380130    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:16 GMT
	I0409 01:14:16.380130    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
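From here the log settles into the node_ready wait: one GET of the node object roughly every 500ms until its Ready condition flips to "True" or the 6m0s budget expires. A minimal sketch of an equivalent poll loop with client-go (minikube's own node_ready implementation differs in its logging and error handling):

```go
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the API server on the ~500ms cadence seen above
// until the named node reports Ready=True or the timeout expires.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
```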
	I0409 01:14:16.876118    7488 type.go:168] "Request Body" body=""
	I0409 01:14:16.876118    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:16.876118    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:16.876118    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:16.876118    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:16.880600    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:16.880700    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:16.880899    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:16 GMT
	I0409 01:14:16.880899    7488 round_trippers.go:587]     Audit-Id: 81456f43-064f-4e92-8c70-89edd8e0cda5
	I0409 01:14:16.880899    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:16.880899    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:16.880899    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:16.880899    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:16.881327    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:17.376254    7488 type.go:168] "Request Body" body=""
	I0409 01:14:17.376254    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:17.376254    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:17.376254    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:17.376254    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:17.380638    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:17.380638    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:17.380638    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:17.380638    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:17 GMT
	I0409 01:14:17.380638    7488 round_trippers.go:587]     Audit-Id: e2cd0d90-1465-45bc-8aa1-1053d997219c
	I0409 01:14:17.380638    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:17.380638    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:17.380638    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:17.381302    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:17.876369    7488 type.go:168] "Request Body" body=""
	I0409 01:14:17.876417    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:17.876417    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:17.876417    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:17.876417    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:17.880422    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:17.880545    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:17.880545    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:17.880545    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:17.880623    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:17 GMT
	I0409 01:14:17.880623    7488 round_trippers.go:587]     Audit-Id: 1c66ac7a-9439-486e-8cb5-15eb0e3a4d54
	I0409 01:14:17.880623    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:17.880623    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:17.880669    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:18.376308    7488 type.go:168] "Request Body" body=""
	I0409 01:14:18.376308    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:18.376308    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:18.376308    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:18.376308    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:18.380485    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:18.380519    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:18.380519    7488 round_trippers.go:587]     Audit-Id: 8262bd89-bf5e-4951-be93-7eb4a8156f5a
	I0409 01:14:18.380519    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:18.380519    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:18.380519    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:18.380519    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:18.380519    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:18 GMT
	I0409 01:14:18.381086    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:18.381417    7488 node_ready.go:53] node "multinode-611500" has status "Ready":"False"
	I0409 01:14:18.875942    7488 type.go:168] "Request Body" body=""
	I0409 01:14:18.875972    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:18.875972    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:18.875972    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:18.875972    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:18.880930    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:18.880930    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:18.880930    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:18 GMT
	I0409 01:14:18.880930    7488 round_trippers.go:587]     Audit-Id: 4e81f934-494c-436c-877f-8a8e32822b3c
	I0409 01:14:18.880930    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:18.880930    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:18.880930    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:18.880930    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:18.881559    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:19.376859    7488 type.go:168] "Request Body" body=""
	I0409 01:14:19.377014    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:19.377014    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:19.377014    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:19.377014    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:19.381388    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:19.381519    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:19.381519    7488 round_trippers.go:587]     Audit-Id: 295eab8e-a84c-4cc9-bc58-b5a7fcaa4eee
	I0409 01:14:19.381519    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:19.381519    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:19.381519    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:19.381519    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:19.381519    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:19 GMT
	I0409 01:14:19.381892    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:19.876049    7488 type.go:168] "Request Body" body=""
	I0409 01:14:19.876049    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:19.876049    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:19.876049    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:19.876407    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:19.882533    7488 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 01:14:19.882624    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:19.882624    7488 round_trippers.go:587]     Audit-Id: 19dd1885-2d2f-4997-9349-5d930c23f77f
	I0409 01:14:19.882624    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:19.882684    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:19.882684    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:19.882684    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:19.882708    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:19 GMT
	I0409 01:14:19.884440    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:20.376575    7488 type.go:168] "Request Body" body=""
	I0409 01:14:20.376575    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:20.376575    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:20.376575    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:20.376575    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:20.380056    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:20.380056    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:20.380204    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:20.380204    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:20.380204    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:20.380204    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:20.380204    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:20 GMT
	I0409 01:14:20.380204    7488 round_trippers.go:587]     Audit-Id: d44198ba-63c2-4dcf-ba97-994666a9cf58
	I0409 01:14:20.380364    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:20.877005    7488 type.go:168] "Request Body" body=""
	I0409 01:14:20.877005    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:20.877005    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:20.877005    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:20.877005    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:20.881552    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:20.881552    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:20.881552    7488 round_trippers.go:587]     Audit-Id: 204aed9f-006c-438c-894c-6c70826a68e6
	I0409 01:14:20.881552    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:20.881552    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:20.881552    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:20.881552    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:20.881552    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:20 GMT
	I0409 01:14:20.882233    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:20.882419    7488 node_ready.go:53] node "multinode-611500" has status "Ready":"False"
	I0409 01:14:21.375520    7488 type.go:168] "Request Body" body=""
	I0409 01:14:21.375520    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:21.375520    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:21.375520    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:21.375520    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:21.381557    7488 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 01:14:21.381635    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:21.381635    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:21.381635    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:21.381635    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:21.381635    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:21.381635    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:21 GMT
	I0409 01:14:21.381635    7488 round_trippers.go:587]     Audit-Id: 21c63af6-60a4-420c-aa59-14f090cba1c6
	I0409 01:14:21.382576    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:21.876421    7488 type.go:168] "Request Body" body=""
	I0409 01:14:21.876421    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:21.876421    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:21.876421    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:21.876421    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:21.881007    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:21.881063    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:21.881063    7488 round_trippers.go:587]     Audit-Id: 77d4cf1d-e7c9-4957-b93f-bb82f50009de
	I0409 01:14:21.881063    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:21.881063    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:21.881063    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:21.881063    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:21.881063    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:21 GMT
	I0409 01:14:21.881434    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:22.376630    7488 type.go:168] "Request Body" body=""
	I0409 01:14:22.376630    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:22.376630    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:22.376630    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:22.376630    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:22.381213    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:22.381213    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:22.381213    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:22.381213    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:22.381213    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:22.381213    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:22.381213    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:22 GMT
	I0409 01:14:22.381213    7488 round_trippers.go:587]     Audit-Id: 1a0200a0-cf44-4b10-a750-99add4779cf5
	I0409 01:14:22.381586    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:22.876370    7488 type.go:168] "Request Body" body=""
	I0409 01:14:22.876370    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:22.876370    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:22.876370    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:22.876370    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:22.881439    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:22.881508    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:22.881508    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:22.881508    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:22.881508    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:22.881508    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:22.881508    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:22 GMT
	I0409 01:14:22.881588    7488 round_trippers.go:587]     Audit-Id: cdbdc35d-6f3e-433f-a040-a905d13a13c9
	I0409 01:14:22.882003    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:23.375586    7488 type.go:168] "Request Body" body=""
	I0409 01:14:23.375586    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:23.375586    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:23.375586    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:23.375586    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:23.379943    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:23.379943    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:23.379943    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:23.380106    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:23.380106    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:23 GMT
	I0409 01:14:23.380106    7488 round_trippers.go:587]     Audit-Id: 85457c91-3b37-4c5f-a2ea-f20e3ae074b7
	I0409 01:14:23.380106    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:23.380106    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:23.380447    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:23.380765    7488 node_ready.go:53] node "multinode-611500" has status "Ready":"False"
	I0409 01:14:23.876110    7488 type.go:168] "Request Body" body=""
	I0409 01:14:23.876110    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:23.876110    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:23.876110    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:23.876110    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:23.879671    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:23.879671    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:23.879671    7488 round_trippers.go:587]     Audit-Id: 4c98b0ef-ca36-469d-a4e3-ebd0477fee9b
	I0409 01:14:23.879671    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:23.879671    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:23.879671    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:23.879671    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:23.879671    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:23 GMT
	I0409 01:14:23.879671    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:24.376115    7488 type.go:168] "Request Body" body=""
	I0409 01:14:24.376115    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:24.376115    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:24.376115    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:24.376115    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:24.380031    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:24.380120    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:24.380120    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:24 GMT
	I0409 01:14:24.380120    7488 round_trippers.go:587]     Audit-Id: 5d477377-182b-412c-b66a-436fbb744098
	I0409 01:14:24.380120    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:24.380120    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:24.380120    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:24.380120    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:24.380552    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:24.876183    7488 type.go:168] "Request Body" body=""
	I0409 01:14:24.876183    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:24.876183    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:24.876183    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:24.876183    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:24.885586    7488 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0409 01:14:24.885586    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:24.885586    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:24.885586    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:24.885586    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:24.885586    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:24 GMT
	I0409 01:14:24.885586    7488 round_trippers.go:587]     Audit-Id: 94086975-1f25-4412-80af-cdc55c26fb66
	I0409 01:14:24.885586    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:24.885977    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:25.375506    7488 type.go:168] "Request Body" body=""
	I0409 01:14:25.375506    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:25.375506    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:25.375506    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:25.375506    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:25.379480    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:25.379480    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:25.379480    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:25.379480    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:25.379480    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:25 GMT
	I0409 01:14:25.379480    7488 round_trippers.go:587]     Audit-Id: 0551d055-b670-41c8-92aa-76d189743da8
	I0409 01:14:25.379480    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:25.379480    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:25.380105    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:25.876064    7488 type.go:168] "Request Body" body=""
	I0409 01:14:25.876064    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:25.876064    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:25.876064    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:25.876064    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:25.880713    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:25.880713    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:25.880713    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:25.880786    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:25 GMT
	I0409 01:14:25.880786    7488 round_trippers.go:587]     Audit-Id: b27b1650-8b0e-4266-b1e6-18efc3e60cfc
	I0409 01:14:25.880786    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:25.880786    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:25.880786    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:25.881897    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:25.882307    7488 node_ready.go:53] node "multinode-611500" has status "Ready":"False"
	I0409 01:14:26.377014    7488 type.go:168] "Request Body" body=""
	I0409 01:14:26.377014    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:26.377014    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:26.377014    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:26.377014    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:26.381624    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:26.381624    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:26.381624    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:26.381624    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:26.381624    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:26.381624    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:26 GMT
	I0409 01:14:26.381624    7488 round_trippers.go:587]     Audit-Id: db931c8f-cc39-4187-834d-99316b10e1b3
	I0409 01:14:26.381624    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:26.382541    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:26.876463    7488 type.go:168] "Request Body" body=""
	I0409 01:14:26.877143    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:26.877143    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:26.877143    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:26.877143    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:26.881795    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:26.881795    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:26.881795    7488 round_trippers.go:587]     Audit-Id: 5213cf94-4a3b-4e7e-93a0-679113b15edf
	I0409 01:14:26.881795    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:26.881795    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:26.881795    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:26.881795    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:26.881795    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:26 GMT
	I0409 01:14:26.882369    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:27.375859    7488 type.go:168] "Request Body" body=""
	I0409 01:14:27.375859    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:27.375859    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:27.375859    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:27.375859    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:27.380876    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:27.380876    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:27.380876    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:27.380876    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:27 GMT
	I0409 01:14:27.380876    7488 round_trippers.go:587]     Audit-Id: 68783c3e-b04b-429d-a668-f83c1081a1e0
	I0409 01:14:27.380876    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:27.380876    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:27.380876    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:27.381413    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:27.875958    7488 type.go:168] "Request Body" body=""
	I0409 01:14:27.875958    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:27.875958    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:27.875958    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:27.875958    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:27.880265    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:27.880418    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:27.880439    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:27 GMT
	I0409 01:14:27.880439    7488 round_trippers.go:587]     Audit-Id: 07d6adbd-aba1-4892-8e92-581c30fcc1a4
	I0409 01:14:27.880439    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:27.880439    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:27.880439    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:27.880439    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:27.880765    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 25 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 38  33 32 38 00 42 08 08 8d  |34242.18328.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22910 chars]
	 >
	I0409 01:14:28.375532    7488 type.go:168] "Request Body" body=""
	I0409 01:14:28.376184    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:28.376252    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:28.376282    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:28.376282    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:28.381142    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:28.381199    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:28.381222    7488 round_trippers.go:587]     Audit-Id: b0853d67-9295-438f-ba1c-6010949a0021
	I0409 01:14:28.381222    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:28.381222    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:28.381222    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:28.381282    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:28.381282    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:28 GMT
	I0409 01:14:28.381381    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:28.381381    7488 node_ready.go:49] node "multinode-611500" has status "Ready":"True"
	I0409 01:14:28.381381    7488 node_ready.go:38] duration metric: took 12.0060864s for node "multinode-611500" to be "Ready" ...
	I0409 01:14:28.381381    7488 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0409 01:14:28.381919    7488 type.go:204] "Request Body" body=""
	I0409 01:14:28.381919    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods
	I0409 01:14:28.382032    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:28.382032    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:28.382068    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:28.386760    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:28.386760    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:28.386760    7488 round_trippers.go:587]     Audit-Id: 04f4efb4-d390-4732-9a48-eb11d9ca34dc
	I0409 01:14:28.386760    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:28.386760    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:28.386760    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:28.386760    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:28.386760    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:28 GMT
	I0409 01:14:28.388973    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 db ee 03 0a  0a 0a 00 12 04 31 39 35  |ist..........195|
		00000020  39 1a 00 12 86 29 0a 99  19 0a 18 63 6f 72 65 64  |9....).....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 64 35  |ns-668d6bf9bc-d5|
		00000040  34 73 34 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |4s4..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 31 32 34 33 31 66 32  |ystem".*$12431f2|
		00000070  37 2d 37 65 34 65 2d 34  31 63 39 2d 38 64 35 34  |7-7e4e-41c9-8d54|
		00000080  2d 62 63 37 62 65 32 30  37 34 62 39 63 32 04 31  |-bc7be2074b9c2.1|
		00000090  38 35 39 38 00 42 08 08  96 88 d7 bf 06 10 00 5a  |8598.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 311806 chars]
	 >
	I0409 01:14:28.390554    7488 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:28.390708    7488 type.go:168] "Request Body" body=""
	I0409 01:14:28.390774    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 01:14:28.390774    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:28.390816    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:28.390816    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:28.393521    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:28.393521    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:28.393521    7488 round_trippers.go:587]     Audit-Id: 0f0e8574-9f44-43a3-a8db-5ac0372ec914
	I0409 01:14:28.393521    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:28.393521    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:28.393521    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:28.393521    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:28.393521    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:28 GMT
	I0409 01:14:28.393521    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  86 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 64 35 34 73 34 12  |68d6bf9bc-d54s4.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 32 34  33 31 66 32 37 2d 37 65  |m".*$12431f27-7e|
		00000060  34 65 2d 34 31 63 39 2d  38 64 35 34 2d 62 63 37  |4e-41c9-8d54-bc7|
		00000070  62 65 32 30 37 34 62 39  63 32 04 31 38 35 39 38  |be2074b9c2.18598|
		00000080  00 42 08 08 96 88 d7 bf  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25042 chars]
	 >
	I0409 01:14:28.394707    7488 type.go:168] "Request Body" body=""
	I0409 01:14:28.394707    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:28.394707    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:28.394860    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:28.394860    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:28.398572    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:28.398572    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:28.398572    7488 round_trippers.go:587]     Audit-Id: 289ef060-dbe6-4413-82fb-7eeedd979218
	I0409 01:14:28.398572    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:28.398572    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:28.398572    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:28.398572    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:28.398572    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:28 GMT
	I0409 01:14:28.398572    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:28.891199    7488 type.go:168] "Request Body" body=""
	I0409 01:14:28.891385    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 01:14:28.891385    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:28.891385    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:28.891385    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:28.898186    7488 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 01:14:28.898186    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:28.898186    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:28.898186    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:28 GMT
	I0409 01:14:28.898186    7488 round_trippers.go:587]     Audit-Id: 3836224a-7ba3-401a-9b4e-929a4538dc6e
	I0409 01:14:28.898186    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:28.898721    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:28.898721    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:28.898973    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  86 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 64 35 34 73 34 12  |68d6bf9bc-d54s4.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 32 34  33 31 66 32 37 2d 37 65  |m".*$12431f27-7e|
		00000060  34 65 2d 34 31 63 39 2d  38 64 35 34 2d 62 63 37  |4e-41c9-8d54-bc7|
		00000070  62 65 32 30 37 34 62 39  63 32 04 31 38 35 39 38  |be2074b9c2.18598|
		00000080  00 42 08 08 96 88 d7 bf  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25042 chars]
	 >
	I0409 01:14:28.898973    7488 type.go:168] "Request Body" body=""
	I0409 01:14:28.898973    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:28.898973    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:28.898973    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:28.899578    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:28.902952    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:28.903952    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:28.903952    7488 round_trippers.go:587]     Audit-Id: 1732fa1b-762c-4a10-a73f-97b148c81258
	I0409 01:14:28.903952    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:28.903952    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:28.903952    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:28.903952    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:28.903952    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:28 GMT
	I0409 01:14:28.903952    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:29.391284    7488 type.go:168] "Request Body" body=""
	I0409 01:14:29.392275    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 01:14:29.392275    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:29.392275    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:29.392418    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:29.398539    7488 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 01:14:29.398675    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:29.398675    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:29.398675    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:29 GMT
	I0409 01:14:29.398675    7488 round_trippers.go:587]     Audit-Id: 084a06a0-502c-46e1-9f87-a24fa3b27639
	I0409 01:14:29.398675    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:29.398675    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:29.398675    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:29.399267    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  86 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 64 35 34 73 34 12  |68d6bf9bc-d54s4.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 32 34  33 31 66 32 37 2d 37 65  |m".*$12431f27-7e|
		00000060  34 65 2d 34 31 63 39 2d  38 64 35 34 2d 62 63 37  |4e-41c9-8d54-bc7|
		00000070  62 65 32 30 37 34 62 39  63 32 04 31 38 35 39 38  |be2074b9c2.18598|
		00000080  00 42 08 08 96 88 d7 bf  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25042 chars]
	 >
	I0409 01:14:29.399509    7488 type.go:168] "Request Body" body=""
	I0409 01:14:29.399509    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:29.399509    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:29.399509    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:29.399509    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:29.401949    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:29.401949    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:29.401949    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:29 GMT
	I0409 01:14:29.401949    7488 round_trippers.go:587]     Audit-Id: 19ea9810-aae3-42f0-9d68-5b0e443b7199
	I0409 01:14:29.401949    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:29.401949    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:29.401949    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:29.401949    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:29.403045    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:29.892182    7488 type.go:168] "Request Body" body=""
	I0409 01:14:29.892182    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 01:14:29.892182    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:29.892182    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:29.892182    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:29.895964    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:29.895964    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:29.895964    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:29.895964    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:29.895964    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:29.895964    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:29 GMT
	I0409 01:14:29.895964    7488 round_trippers.go:587]     Audit-Id: 99f50467-2af0-4b65-b7f7-71303fb4b702
	I0409 01:14:29.895964    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:29.895964    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  86 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 64 35 34 73 34 12  |68d6bf9bc-d54s4.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 32 34  33 31 66 32 37 2d 37 65  |m".*$12431f27-7e|
		00000060  34 65 2d 34 31 63 39 2d  38 64 35 34 2d 62 63 37  |4e-41c9-8d54-bc7|
		00000070  62 65 32 30 37 34 62 39  63 32 04 31 38 35 39 38  |be2074b9c2.18598|
		00000080  00 42 08 08 96 88 d7 bf  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25042 chars]
	 >
	I0409 01:14:29.896845    7488 type.go:168] "Request Body" body=""
	I0409 01:14:29.896899    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:29.896899    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:29.896899    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:29.896899    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:29.899759    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:29.899863    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:29.899902    7488 round_trippers.go:587]     Audit-Id: df33a631-e5f4-4aa6-a163-a5213fcbfd56
	I0409 01:14:29.899902    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:29.899902    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:29.899902    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:29.899902    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:29.899902    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:29 GMT
	I0409 01:14:29.900213    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:30.391489    7488 type.go:168] "Request Body" body=""
	I0409 01:14:30.391489    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 01:14:30.391489    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:30.391489    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:30.391489    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:30.396311    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:30.396311    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:30.396448    7488 round_trippers.go:587]     Audit-Id: c34e0e79-0ff8-4803-9c66-0cfc740158d6
	I0409 01:14:30.396473    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:30.396473    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:30.396473    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:30.396473    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:30.396473    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:30 GMT
	I0409 01:14:30.396665    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  86 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 64 35 34 73 34 12  |68d6bf9bc-d54s4.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 32 34  33 31 66 32 37 2d 37 65  |m".*$12431f27-7e|
		00000060  34 65 2d 34 31 63 39 2d  38 64 35 34 2d 62 63 37  |4e-41c9-8d54-bc7|
		00000070  62 65 32 30 37 34 62 39  63 32 04 31 38 35 39 38  |be2074b9c2.18598|
		00000080  00 42 08 08 96 88 d7 bf  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25042 chars]
	 >
	I0409 01:14:30.396665    7488 type.go:168] "Request Body" body=""
	I0409 01:14:30.396665    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:30.396665    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:30.396665    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:30.396665    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:30.399865    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:30.399945    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:30.399945    7488 round_trippers.go:587]     Audit-Id: d5678d61-39a5-4a06-bdba-26f94c7b8ca0
	I0409 01:14:30.399945    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:30.399945    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:30.399945    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:30.399945    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:30.399945    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:30 GMT
	I0409 01:14:30.401119    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:30.401330    7488 pod_ready.go:103] pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace has status "Ready":"False"
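The timestamps show the shape of the readiness wait: pod_ready.go re-fetches the CoreDNS pod (and then node multinode-611500) roughly every 500ms, logging pod_ready.go:103 on each iteration in which the Ready condition is still False, all under the 6m0s budget announced when the wait began. A hedged client-go sketch of an equivalent loop follows; the function name and structure are illustrative, not minikube's actual pod_ready implementation:

    // Illustrative approximation of the poll loop visible in this log
    // (minikube's real code lives in pod_ready.go; names here are made up).
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        ctx, cancel := context.WithTimeout(ctx, 6*time.Minute) // 6m0s budget, as logged
        defer cancel()
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil // has status "Ready":"True"
                    }
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("pod %s/%s never became Ready: %w", ns, name, ctx.Err())
            case <-time.After(500 * time.Millisecond): // ~2 polls/s, matching the timestamps
            }
        }
    }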
	I0409 01:14:30.890758    7488 type.go:168] "Request Body" body=""
	I0409 01:14:30.890758    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 01:14:30.890758    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:30.890758    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:30.890758    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:30.894765    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:30.894765    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:30.894765    7488 round_trippers.go:587]     Audit-Id: ede2eb51-16e8-4ee3-9a9a-a9d13afa88ca
	I0409 01:14:30.894765    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:30.894765    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:30.894765    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:30.894765    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:30.894765    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:30 GMT
	I0409 01:14:30.895757    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  86 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 64 35 34 73 34 12  |68d6bf9bc-d54s4.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 32 34  33 31 66 32 37 2d 37 65  |m".*$12431f27-7e|
		00000060  34 65 2d 34 31 63 39 2d  38 64 35 34 2d 62 63 37  |4e-41c9-8d54-bc7|
		00000070  62 65 32 30 37 34 62 39  63 32 04 31 38 35 39 38  |be2074b9c2.18598|
		00000080  00 42 08 08 96 88 d7 bf  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25042 chars]
	 >
	I0409 01:14:30.895757    7488 type.go:168] "Request Body" body=""
	I0409 01:14:30.895757    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:30.895757    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:30.895757    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:30.895757    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:30.898777    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:30.898777    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:30.898777    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:30.898777    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:30 GMT
	I0409 01:14:30.898777    7488 round_trippers.go:587]     Audit-Id: 4176e11a-6b50-4bab-9a15-917e42a3ebd6
	I0409 01:14:30.898777    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:30.898777    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:30.898777    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:30.899757    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:31.391403    7488 type.go:168] "Request Body" body=""
	I0409 01:14:31.391403    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 01:14:31.391403    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:31.391403    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:31.391403    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:31.398287    7488 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 01:14:31.398356    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:31.398356    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:31.398434    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:31.398434    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:31 GMT
	I0409 01:14:31.398434    7488 round_trippers.go:587]     Audit-Id: e14b7521-93b9-4f36-ab46-bea874f56067
	I0409 01:14:31.398434    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:31.398434    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:31.398434    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  86 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 64 35 34 73 34 12  |68d6bf9bc-d54s4.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 32 34  33 31 66 32 37 2d 37 65  |m".*$12431f27-7e|
		00000060  34 65 2d 34 31 63 39 2d  38 64 35 34 2d 62 63 37  |4e-41c9-8d54-bc7|
		00000070  62 65 32 30 37 34 62 39  63 32 04 31 38 35 39 38  |be2074b9c2.18598|
		00000080  00 42 08 08 96 88 d7 bf  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25042 chars]
	 >
	I0409 01:14:31.399210    7488 type.go:168] "Request Body" body=""
	I0409 01:14:31.399327    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:31.399381    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:31.399381    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:31.399381    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:31.405016    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:31.405098    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:31.405098    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:31.405098    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:31 GMT
	I0409 01:14:31.405098    7488 round_trippers.go:587]     Audit-Id: 6f56fd36-d522-4540-8719-f9524d02f8cf
	I0409 01:14:31.405098    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:31.405175    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:31.405175    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:31.405510    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:31.891283    7488 type.go:168] "Request Body" body=""
	I0409 01:14:31.891283    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 01:14:31.891283    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:31.891283    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:31.891283    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:31.895851    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:31.895922    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:31.895922    7488 round_trippers.go:587]     Audit-Id: 2e613601-421b-42bd-b539-afe03d13c444
	I0409 01:14:31.895922    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:31.896013    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:31.896013    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:31.896013    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:31.896013    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:31 GMT
	I0409 01:14:31.896454    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  86 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 64 35 34 73 34 12  |68d6bf9bc-d54s4.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 32 34  33 31 66 32 37 2d 37 65  |m".*$12431f27-7e|
		00000060  34 65 2d 34 31 63 39 2d  38 64 35 34 2d 62 63 37  |4e-41c9-8d54-bc7|
		00000070  62 65 32 30 37 34 62 39  63 32 04 31 38 35 39 38  |be2074b9c2.18598|
		00000080  00 42 08 08 96 88 d7 bf  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25042 chars]
	 >
	I0409 01:14:31.896766    7488 type.go:168] "Request Body" body=""
	I0409 01:14:31.896902    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:31.896929    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:31.896929    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:31.896929    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:31.899757    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:31.899847    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:31.899847    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:31.899847    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:31.899847    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:31.899847    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:31.899847    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:31 GMT
	I0409 01:14:31.899847    7488 round_trippers.go:587]     Audit-Id: 4ef062de-f7dd-47b0-85eb-056075b84bcd
	I0409 01:14:31.900141    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:32.391371    7488 type.go:168] "Request Body" body=""
	I0409 01:14:32.391371    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-d54s4
	I0409 01:14:32.391371    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:32.391371    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:32.391371    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:32.396364    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:32.396364    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:32.396364    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:32.396364    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:32.396364    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:32.396364    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:32.396364    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:32 GMT
	I0409 01:14:32.396364    7488 round_trippers.go:587]     Audit-Id: c1517004-eff2-42db-b483-55c12e64abc7
	I0409 01:14:32.396364    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  c7 28 0a af 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.(.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 64 35 34 73 34 12  |68d6bf9bc-d54s4.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 31 32 34  33 31 66 32 37 2d 37 65  |m".*$12431f27-7e|
		00000060  34 65 2d 34 31 63 39 2d  38 64 35 34 2d 62 63 37  |4e-41c9-8d54-bc7|
		00000070  62 65 32 30 37 34 62 39  63 32 04 31 39 37 36 38  |be2074b9c2.19768|
		00000080  00 42 08 08 96 88 d7 bf  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 24727 chars]
	 >
	I0409 01:14:32.396364    7488 type.go:168] "Request Body" body=""
	I0409 01:14:32.396364    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:32.396364    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:32.396364    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:32.396364    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:32.402366    7488 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 01:14:32.402366    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:32.402877    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:32.402877    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:32 GMT
	I0409 01:14:32.402877    7488 round_trippers.go:587]     Audit-Id: 6d530187-f48a-482b-9c50-4006f3a3fdee
	I0409 01:14:32.402877    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:32.402877    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:32.402877    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:32.403439    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:32.403439    7488 pod_ready.go:93] pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace has status "Ready":"True"
	I0409 01:14:32.403439    7488 pod_ready.go:82] duration metric: took 4.0127507s for pod "coredns-668d6bf9bc-d54s4" in "kube-system" namespace to be "Ready" ...
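The CoreDNS wait completes here after roughly 4s of polling. A side note on the bodies throughout this trace: every request advertises Accept: application/vnd.kubernetes.protobuf,application/json and the apiserver replies with Content-Type: application/vnd.kubernetes.protobuf, which is why the logged bodies are protobuf hexdumps rather than JSON. A sketch of opting a client-go client into the same negotiation (the rest.Config fields are real client-go fields; the kubeconfig wiring is illustrative):

    // Sketch: requesting protobuf responses, matching the Accept header above.
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newProtobufClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        // rest.Config embeds ContentConfig; these two fields yield the
        // "Accept: application/vnd.kubernetes.protobuf,application/json" header.
        cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
        cfg.ContentType = "application/vnd.kubernetes.protobuf"
        return kubernetes.NewForConfig(cfg)
    }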
	I0409 01:14:32.403439    7488 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:32.403439    7488 type.go:168] "Request Body" body=""
	I0409 01:14:32.403439    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-611500
	I0409 01:14:32.403439    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:32.403439    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:32.403439    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:32.406834    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:32.406834    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:32.406834    7488 round_trippers.go:587]     Audit-Id: b8d7a01e-0602-444d-a1fc-258b1f888a39
	I0409 01:14:32.406834    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:32.406834    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:32.406834    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:32.406834    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:32.406834    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:32 GMT
	I0409 01:14:32.406834    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  8c 2c 0a a1 1a 0a 15 65  74 63 64 2d 6d 75 6c 74  |.,.....etcd-mult|
		00000020  69 6e 6f 64 65 2d 36 31  31 35 30 30 12 00 1a 0b  |inode-611500....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 65  |kube-system".*$e|
		00000040  36 62 33 39 62 31 61 2d  61 36 64 35 2d 34 36 64  |6b39b1a-a6d5-46d|
		00000050  31 2d 61 35 36 61 2d 32  34 33 63 39 62 62 36 66  |1-a56a-243c9bb6f|
		00000060  35 36 33 32 04 31 39 34  39 38 00 42 08 08 e6 93  |5632.19498.B....|
		00000070  d7 bf 06 10 00 5a 11 0a  09 63 6f 6d 70 6f 6e 65  |.....Z...compone|
		00000080  6e 74 12 04 65 74 63 64  5a 15 0a 04 74 69 65 72  |nt..etcdZ...tier|
		00000090  12 0d 63 6f 6e 74 72 6f  6c 2d 70 6c 61 6e 65 62  |..control-planeb|
		000000a0  50 0a 30 6b 75 62 65 61  64 6d 2e 6b 75 62 65 72  |P.0kubeadm.kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 65 74 63 64 2e 61 64  |netes.io/etcd.ad|
		000000c0  76 65 72 74 69 73 65 2d  63 6c 69 65 6e 74 2d 75  |vertise-client- [truncated 27007 chars]
	 >
	I0409 01:14:32.407514    7488 type.go:168] "Request Body" body=""
	I0409 01:14:32.407540    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:32.407597    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:32.407597    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:32.407597    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:32.409506    7488 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0409 01:14:32.409506    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:32.409506    7488 round_trippers.go:587]     Audit-Id: cc0831f9-8cdf-4941-8476-0aa607c5648b
	I0409 01:14:32.409506    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:32.409506    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:32.409506    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:32.409506    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:32.409506    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:32 GMT
	I0409 01:14:32.409506    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:32.410137    7488 pod_ready.go:93] pod "etcd-multinode-611500" in "kube-system" namespace has status "Ready":"True"
	I0409 01:14:32.410137    7488 pod_ready.go:82] duration metric: took 6.6976ms for pod "etcd-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:32.410233    7488 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:32.410335    7488 type.go:168] "Request Body" body=""
	I0409 01:14:32.410360    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-611500
	I0409 01:14:32.410443    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:32.410443    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:32.410443    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:32.413957    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:32.413957    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:32.413957    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:32.413957    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:32.413957    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:32.413957    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:32.413957    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:32 GMT
	I0409 01:14:32.413957    7488 round_trippers.go:587]     Audit-Id: ae104c88-34aa-4a7e-9b1b-c0ad61a1374d
	I0409 01:14:32.414624    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a8 36 0a b1 1c 0a 1f 6b  75 62 65 2d 61 70 69 73  |.6.....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  36 31 31 35 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |611500....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 66 39 39 32 34 37 35  |ystem".*$f992475|
		00000050  34 2d 66 38 63 35 2d 34  61 38 62 2d 39 64 61 32  |4-f8c5-4a8b-9da2|
		00000060  2d 32 33 64 38 30 39 36  61 35 65 63 66 32 04 31  |-23d8096a5ecf2.1|
		00000070  39 34 31 38 00 42 08 08  e6 93 d7 bf 06 10 00 5a  |9418.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 61 70 69 73 65  72 76 65 72 5a 15 0a 04  |be-apiserverZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 57 0a 3f 6b  75 62 65 61 64 6d 2e 6b  |anebW.?kubeadm.k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 6b 75 62  |ubernetes.io/ku [truncated 33418 chars]
	 >
	I0409 01:14:32.414808    7488 type.go:168] "Request Body" body=""
	I0409 01:14:32.414808    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:32.414808    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:32.414808    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:32.414808    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:32.417983    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:32.418034    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:32.418034    7488 round_trippers.go:587]     Audit-Id: 21c324f1-9bab-44d6-8694-3e92661e929f
	I0409 01:14:32.418077    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:32.418100    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:32.418125    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:32.418218    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:32.418395    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:32 GMT
	I0409 01:14:32.418511    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:32.418511    7488 pod_ready.go:93] pod "kube-apiserver-multinode-611500" in "kube-system" namespace has status "Ready":"True"
	I0409 01:14:32.418511    7488 pod_ready.go:82] duration metric: took 8.2419ms for pod "kube-apiserver-multinode-611500" in "kube-system" namespace to be "Ready" ...
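kube-apiserver reports Ready after only 8.2ms (the static control-plane pods were already up; only CoreDNS needed sustained polling), and the log moves on to kube-controller-manager below. The waits run strictly in sequence, one pod at a time, each paired with a GET of node multinode-611500. As a rough continuation of the earlier waitPodReady sketch (same assumed helper, ctx, and clientset cs; pod names taken from this log), the chaining could look like:

    // Sketch: chaining the per-pod readiness waits in the order seen here.
    pods := []string{
        "coredns-668d6bf9bc-d54s4",
        "etcd-multinode-611500",
        "kube-apiserver-multinode-611500",
        "kube-controller-manager-multinode-611500",
    }
    for _, name := range pods {
        if err := waitPodReady(ctx, cs, "kube-system", name); err != nil {
            return err
        }
    }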
	I0409 01:14:32.418511    7488 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:32.418511    7488 type.go:168] "Request Body" body=""
	I0409 01:14:32.419039    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:32.419102    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:32.419102    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:32.419102    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:32.422171    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:32.422171    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:32.422171    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:32.422171    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:32 GMT
	I0409 01:14:32.422171    7488 round_trippers.go:587]     Audit-Id: 670f263b-6be4-4125-b69b-34055ae2c84c
	I0409 01:14:32.422171    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:32.422171    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:32.422171    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:32.422171    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:32.422171    7488 type.go:168] "Request Body" body=""
	I0409 01:14:32.422171    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:32.422171    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:32.422171    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:32.422171    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:32.429096    7488 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 01:14:32.430116    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:32.430116    7488 round_trippers.go:587]     Audit-Id: 8c3f3a5c-2c59-4e30-837f-25dd36087c03
	I0409 01:14:32.430116    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:32.430116    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:32.430116    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:32.430116    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:32.430116    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:32 GMT
	I0409 01:14:32.430116    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:32.919082    7488 type.go:168] "Request Body" body=""
	I0409 01:14:32.919082    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:32.919082    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:32.919082    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:32.919082    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:32.923585    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:32.923667    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:32.923667    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:32.923667    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:32.923667    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:32 GMT
	I0409 01:14:32.923667    7488 round_trippers.go:587]     Audit-Id: 962775c5-3cdb-49e6-855f-4dcb9e551ab6
	I0409 01:14:32.923667    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:32.923667    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:32.924265    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:32.924672    7488 type.go:168] "Request Body" body=""
	I0409 01:14:32.924672    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:32.924672    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:32.924672    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:32.924672    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:32.927245    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:32.927245    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:32.927245    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:32 GMT
	I0409 01:14:32.927245    7488 round_trippers.go:587]     Audit-Id: 39646721-cffc-457e-ace4-1f5cca8e1b17
	I0409 01:14:32.927245    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:32.927245    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:32.927245    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:32.927245    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:32.932649    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
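
The blocks above and below follow client-go's round-tripper debug format: at high verbosity the client prints each request line, selected request headers, the response status with latency, the response headers, and a truncated hexdump of the body. The `X-Kubernetes-Pf-Flowschema-Uid` and `X-Kubernetes-Pf-Prioritylevel-Uid` headers come from API Priority and Fairness and identify the FlowSchema and PriorityLevelConfiguration that admitted each request. As a rough sketch of the technique only (not minikube's or client-go's actual `round_trippers.go`), a logging `http.RoundTripper` in Go looks like this:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// loggingTripper wraps another RoundTripper and prints the request line,
// a few request headers, and the response status with elapsed time,
// in the same spirit as the debug output in this trace.
type loggingTripper struct {
	next http.RoundTripper
}

func (t *loggingTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	fmt.Printf("%s %s\n", req.Method, req.URL)
	fmt.Println("Request Headers:")
	for _, h := range []string{"Accept", "User-Agent"} {
		if v := req.Header.Get(h); v != "" {
			fmt.Printf("    %s: %s\n", h, v)
		}
	}
	resp, err := t.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	fmt.Printf("Response Status: %s in %d milliseconds\n",
		resp.Status, time.Since(start).Milliseconds())
	return resp, nil
}

func main() {
	// Any URL works; the wrapper is transparent to the caller.
	client := &http.Client{Transport: &loggingTripper{next: http.DefaultTransport}}
	resp, err := client.Get("https://example.com/")
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
}
```

Because the logging lives in the transport rather than in the calling code, every API call in the trace carries the same format regardless of which part of minikube issued it.
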
	I0409 01:14:33.419188    7488 type.go:168] "Request Body" body=""
	I0409 01:14:33.419188    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:33.419188    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:33.419188    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:33.419188    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:33.423081    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:33.423152    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:33.423152    7488 round_trippers.go:587]     Audit-Id: 078159cb-c7f7-4634-a520-538ff89e63a2
	I0409 01:14:33.423152    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:33.423152    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:33.423152    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:33.423152    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:33.423152    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:33 GMT
	I0409 01:14:33.423557    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:33.423918    7488 type.go:168] "Request Body" body=""
	I0409 01:14:33.423971    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:33.423971    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:33.424029    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:33.424029    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:33.427380    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:33.427467    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:33.427542    7488 round_trippers.go:587]     Audit-Id: dae36050-4a9b-4e57-92a8-bcdf8f5a25d5
	I0409 01:14:33.427542    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:33.427542    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:33.427542    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:33.427542    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:33.427542    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:33 GMT
	I0409 01:14:33.427599    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:33.919535    7488 type.go:168] "Request Body" body=""
	I0409 01:14:33.919535    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:33.920149    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:33.920149    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:33.920149    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:33.924949    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:33.925050    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:33.925050    7488 round_trippers.go:587]     Audit-Id: 3403a0fe-7d52-41b3-9498-1a8206ef33b2
	I0409 01:14:33.925050    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:33.925050    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:33.925050    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:33.925050    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:33.925108    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:33 GMT
	I0409 01:14:33.925520    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:33.925724    7488 type.go:168] "Request Body" body=""
	I0409 01:14:33.925724    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:33.925724    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:33.925724    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:33.925724    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:33.929401    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:33.929492    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:33.929492    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:33.929492    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:33.929492    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:33.929492    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:33.929492    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:33 GMT
	I0409 01:14:33.929492    7488 round_trippers.go:587]     Audit-Id: b07a4e6b-2fc3-4689-905e-8b2b706d4788
	I0409 01:14:33.929642    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:34.419230    7488 type.go:168] "Request Body" body=""
	I0409 01:14:34.419230    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:34.419230    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:34.419230    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:34.419230    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:34.423424    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:34.423499    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:34.423499    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:34.423499    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:34 GMT
	I0409 01:14:34.423499    7488 round_trippers.go:587]     Audit-Id: 97d7413a-630e-489e-aff4-701df6dfbf3b
	I0409 01:14:34.423499    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:34.423499    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:34.423499    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:34.423880    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:34.424031    7488 type.go:168] "Request Body" body=""
	I0409 01:14:34.424031    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:34.424031    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:34.424031    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:34.424031    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:34.429424    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:34.429424    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:34.429424    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:34.429424    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:34.429424    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:34 GMT
	I0409 01:14:34.429424    7488 round_trippers.go:587]     Audit-Id: a9296c7b-0384-4e00-8304-c02fc9a82168
	I0409 01:14:34.429424    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:34.429424    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:34.430179    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:34.430379    7488 pod_ready.go:103] pod "kube-controller-manager-multinode-611500" in "kube-system" namespace has status "Ready":"False"
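
The `pod_ready.go:103` line above closes one poll cycle: the pod's `Ready` condition is still `False`, so minikube re-fetches the pod and node roughly every 500 ms, as the timestamps show. A minimal sketch of that readiness-polling pattern with client-go follows; the kubeconfig handling is illustrative and this is not minikube's actual `pod_ready.go`:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Load the default kubeconfig; adjust the path for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-controller-manager-multinode-611500", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
	}
}
```

Real code would bound the loop with a context deadline or the helpers in `k8s.io/apimachinery/pkg/util/wait`; the unbounded loop here mirrors the trace, which keeps polling until the test's own timeout fires.
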
	I0409 01:14:34.919217    7488 type.go:168] "Request Body" body=""
	I0409 01:14:34.919217    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:34.919217    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:34.919217    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:34.919217    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:34.924490    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:34.924490    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:34.924490    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:34.924490    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:34.924490    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:34 GMT
	I0409 01:14:34.924490    7488 round_trippers.go:587]     Audit-Id: bf703a23-0194-4295-b5a7-84ba94a961c1
	I0409 01:14:34.924490    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:34.924490    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:34.925343    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:34.925593    7488 type.go:168] "Request Body" body=""
	I0409 01:14:34.925667    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:34.925728    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:34.925748    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:34.925748    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:34.928112    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:34.928112    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:34.928500    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:34.928500    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:34.928500    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:34.928500    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:34 GMT
	I0409 01:14:34.928500    7488 round_trippers.go:587]     Audit-Id: 76046b8b-fb86-4cd7-a662-46eb992451b7
	I0409 01:14:34.928500    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:34.928709    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:35.418954    7488 type.go:168] "Request Body" body=""
	I0409 01:14:35.418954    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:35.418954    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:35.418954    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:35.418954    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:35.423657    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:35.423766    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:35.423766    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:35.423766    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:35.423766    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:35.423766    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:35 GMT
	I0409 01:14:35.423766    7488 round_trippers.go:587]     Audit-Id: 2cc48a53-e9de-4741-a06b-105b349fb29f
	I0409 01:14:35.423766    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:35.424736    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:35.424974    7488 type.go:168] "Request Body" body=""
	I0409 01:14:35.424974    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:35.424974    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:35.424974    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:35.424974    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:35.427798    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:35.427798    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:35.428248    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:35.428248    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:35.428248    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:35.428248    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:35 GMT
	I0409 01:14:35.428248    7488 round_trippers.go:587]     Audit-Id: 8943f520-268a-4de8-8464-c0aa1162a31b
	I0409 01:14:35.428248    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:35.428550    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:35.919091    7488 type.go:168] "Request Body" body=""
	I0409 01:14:35.919091    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:35.919091    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:35.919091    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:35.919091    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:35.923777    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:35.923777    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:35.923861    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:35.923861    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:35.923861    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:35.923861    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:35 GMT
	I0409 01:14:35.923861    7488 round_trippers.go:587]     Audit-Id: 2c9eb9c9-f247-400d-8b03-e051159a48cb
	I0409 01:14:35.923861    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:35.924510    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:35.924865    7488 type.go:168] "Request Body" body=""
	I0409 01:14:35.924865    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:35.924946    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:35.924946    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:35.924946    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:35.928711    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:35.928841    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:35.928910    7488 round_trippers.go:587]     Audit-Id: 61aa530f-353d-4a4e-ad35-5a6a18611261
	I0409 01:14:35.928910    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:35.928910    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:35.928910    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:35.928970    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:35.928970    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:35 GMT
	I0409 01:14:35.929216    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:36.419859    7488 type.go:168] "Request Body" body=""
	I0409 01:14:36.419963    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:36.419963    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:36.419963    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:36.420113    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:36.424738    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:36.424738    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:36.424794    7488 round_trippers.go:587]     Audit-Id: 11cf8e1d-ddf2-48b2-94ca-7e94145c59c3
	I0409 01:14:36.424794    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:36.424794    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:36.424794    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:36.424794    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:36.424794    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:36 GMT
	I0409 01:14:36.425266    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:36.425511    7488 type.go:168] "Request Body" body=""
	I0409 01:14:36.425511    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:36.425511    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:36.425511    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:36.425511    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:36.428660    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:36.428660    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:36.428660    7488 round_trippers.go:587]     Audit-Id: 7d6fd480-cc38-4d82-8966-42522a375db8
	I0409 01:14:36.428756    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:36.428756    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:36.428756    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:36.428756    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:36.428756    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:36 GMT
	I0409 01:14:36.429144    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:36.923871    7488 type.go:168] "Request Body" body=""
	I0409 01:14:36.924173    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:36.924173    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:36.924230    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:36.924230    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:36.926122    7488 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0409 01:14:36.926122    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:36.926122    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:36 GMT
	I0409 01:14:36.926122    7488 round_trippers.go:587]     Audit-Id: a3a666c8-467f-4819-9810-b18125e83ac7
	I0409 01:14:36.926122    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:36.926122    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:36.926122    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:36.926122    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:36.926122    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:36.926122    7488 type.go:168] "Request Body" body=""
	I0409 01:14:36.926122    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:36.926122    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:36.926122    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:36.926122    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:36.934759    7488 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0409 01:14:36.934759    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:36.934759    7488 round_trippers.go:587]     Audit-Id: 04627fa6-cd1f-4487-85ed-557cd328d104
	I0409 01:14:36.934759    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:36.934759    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:36.934759    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:36.934759    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:36.934759    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:36 GMT
	I0409 01:14:36.934759    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:36.935314    7488 pod_ready.go:103] pod "kube-controller-manager-multinode-611500" in "kube-system" namespace has status "Ready":"False"
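
Every `Response Body` dump in this trace begins with the bytes `6b 38 73 00` ("k8s" plus a NUL). That is the magic prefix the Kubernetes protobuf serializer writes ahead of a `runtime.Unknown` envelope whose type metadata (`v1`/`Node`, `v1`/`Pod`) is visible in the next few bytes, and it is why the bodies appear as hexdumps rather than JSON: the client negotiated `application/vnd.kubernetes.protobuf` via the `Accept` header on each request. A small self-contained check of that prefix, using sample bytes copied from the dump above:

```go
package main

import (
	"bytes"
	"fmt"
)

// k8sProtoMagic is the 4-byte prefix ("k8s\x00") that marks a
// Kubernetes protobuf payload.
var k8sProtoMagic = []byte{0x6b, 0x38, 0x73, 0x00}

// stripMagic returns the payload after the magic prefix, or an error
// if the data is not a Kubernetes protobuf envelope.
func stripMagic(data []byte) ([]byte, error) {
	if !bytes.HasPrefix(data, k8sProtoMagic) {
		return nil, fmt.Errorf("not a Kubernetes protobuf payload")
	}
	return data[len(k8sProtoMagic):], nil
}

func main() {
	// First bytes of the Node response body from the trace above:
	// "k8s\x00" followed by the runtime.Unknown type metadata for v1/Node.
	sample := []byte{
		0x6b, 0x38, 0x73, 0x00,
		0x0a, 0x0a, 0x0a, 0x02, 'v', '1', 0x12, 0x04, 'N', 'o', 'd', 'e',
	}
	rest, err := stripMagic(sample)
	if err != nil {
		panic(err)
	}
	fmt.Printf("payload after magic: % x\n", rest)
}
```

Fully decoding the remainder needs the apimachinery protobuf serializer and a scheme that registers the types; recognizing the magic prefix alone is enough to read traces like this one.
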
	I0409 01:14:37.419170    7488 type.go:168] "Request Body" body=""
	I0409 01:14:37.419170    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:37.419170    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:37.419170    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:37.419170    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:37.427982    7488 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0409 01:14:37.427982    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:37.427982    7488 round_trippers.go:587]     Audit-Id: 1ab6f036-148a-4662-bdd9-f0f87b3098b1
	I0409 01:14:37.427982    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:37.427982    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:37.427982    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:37.427982    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:37.427982    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:37 GMT
	I0409 01:14:37.429265    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:37.429433    7488 type.go:168] "Request Body" body=""
	I0409 01:14:37.429433    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:37.429433    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:37.429433    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:37.429433    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:37.432765    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:37.432765    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:37.432765    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:37 GMT
	I0409 01:14:37.432765    7488 round_trippers.go:587]     Audit-Id: 55e55312-ff41-4169-8158-e8b8ee91c920
	I0409 01:14:37.432765    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:37.432765    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:37.432765    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:37.432765    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:37.433469    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:37.919140    7488 type.go:168] "Request Body" body=""
	I0409 01:14:37.919140    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:37.919140    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:37.919140    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:37.919140    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:37.924094    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:37.924094    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:37.924196    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:37.924196    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:37.924196    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:37 GMT
	I0409 01:14:37.924233    7488 round_trippers.go:587]     Audit-Id: ffa81669-e661-4597-b247-7efb80ea595f
	I0409 01:14:37.924233    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:37.924233    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:37.924648    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:37.925009    7488 type.go:168] "Request Body" body=""
	I0409 01:14:37.925082    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:37.925168    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:37.925168    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:37.925198    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:37.928890    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:37.928890    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:37.929113    7488 round_trippers.go:587]     Audit-Id: 6e299306-b7b0-47c2-af66-30b46c1a40fb
	I0409 01:14:37.929113    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:37.929113    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:37.929113    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:37.929113    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:37.929113    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:37 GMT
	I0409 01:14:37.929113    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:38.420621    7488 type.go:168] "Request Body" body=""
	I0409 01:14:38.420732    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:38.420802    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:38.420802    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:38.420802    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:38.428413    7488 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0409 01:14:38.428507    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:38.428558    7488 round_trippers.go:587]     Audit-Id: a8271f83-ec34-4c62-9c6c-ef95332d1aa0
	I0409 01:14:38.428558    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:38.428558    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:38.428558    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:38.428558    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:38.428558    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:38 GMT
	I0409 01:14:38.429198    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:38.429621    7488 type.go:168] "Request Body" body=""
	I0409 01:14:38.429693    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:38.429693    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:38.429693    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:38.429693    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:38.432274    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:38.432274    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:38.432274    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:38.432274    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:38 GMT
	I0409 01:14:38.432274    7488 round_trippers.go:587]     Audit-Id: a843f51e-1c95-4132-9b82-f76fe5c28727
	I0409 01:14:38.432274    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:38.432274    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:38.432274    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:38.432274    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
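The request headers in these cycles show the client negotiating content types: it asks for application/vnd.kubernetes.protobuf with application/json as the fallback, and every response above comes back as Protobuf. In client-go this negotiation is two fields on rest.Config; a minimal sketch (host copied from the log; nothing is actually dialed):

    package main

    import (
        "fmt"

        "k8s.io/client-go/rest"
    )

    func main() {
        // Reproduces the Accept/Content-Type negotiation visible in the
        // request and response headers above.
        cfg := &rest.Config{Host: "https://192.168.120.172:8443"}
        cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
        cfg.ContentType = "application/vnd.kubernetes.protobuf"
        fmt.Printf("Accept: %s\nContent-Type: %s\n", cfg.AcceptContentTypes, cfg.ContentType)
    }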
	I0409 01:14:38.919551    7488 type.go:168] "Request Body" body=""
	I0409 01:14:38.919551    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:38.919551    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:38.919551    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:38.919551    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:38.925823    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:38.925823    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:38.925823    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:38.925823    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:38.925823    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:38 GMT
	I0409 01:14:38.925823    7488 round_trippers.go:587]     Audit-Id: 20a82344-dd93-4cfd-a74e-2935de0c6c74
	I0409 01:14:38.925823    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:38.925823    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:38.926509    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:38.926759    7488 type.go:168] "Request Body" body=""
	I0409 01:14:38.926839    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:38.926839    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:38.926839    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:38.926839    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:38.929525    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:38.929525    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:38.929525    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:38.929525    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:38.929525    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:38.929525    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:38 GMT
	I0409 01:14:38.929525    7488 round_trippers.go:587]     Audit-Id: 4059015e-45e4-44d2-932d-413b50e923d3
	I0409 01:14:38.929525    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:38.929525    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:39.419121    7488 type.go:168] "Request Body" body=""
	I0409 01:14:39.419121    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:39.419121    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:39.419121    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:39.419121    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:39.423755    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:39.423817    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:39.423817    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:39.423817    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:39.423817    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:39.423817    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:39 GMT
	I0409 01:14:39.423817    7488 round_trippers.go:587]     Audit-Id: 05b993c6-cc1b-4241-bd28-58c3be21f462
	I0409 01:14:39.423817    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:39.423817    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:39.424479    7488 type.go:168] "Request Body" body=""
	I0409 01:14:39.424571    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:39.424636    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:39.424743    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:39.424765    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:39.427419    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:39.427602    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:39.427602    7488 round_trippers.go:587]     Audit-Id: f20a4c2a-8e38-4743-9008-99e62a051fc1
	I0409 01:14:39.427602    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:39.427602    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:39.427602    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:39.427602    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:39.427602    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:39 GMT
	I0409 01:14:39.427789    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:39.427789    7488 pod_ready.go:103] pod "kube-controller-manager-multinode-611500" in "kube-system" namespace has status "Ready":"False"
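Each pod_ready.go:103 line above records one failed readiness check: the test is waiting for the kube-controller-manager pod's Ready condition to turn True, and each pass re-fetches both the pod and (apparently to confirm the node is still healthy) the node object, which is why every cycle pairs a pod GET with a node GET. A minimal client-go sketch of the same check (our own code, not minikube's; assumes a reachable kubeconfig at the default path):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True, which is
    // the status the log lines above keep printing as "False".
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms, matching the cadence of the timestamps above.
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "kube-controller-manager-multinode-611500", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            fmt.Println("pod not Ready yet; retrying")
            time.Sleep(500 * time.Millisecond)
        }
    }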
	I0409 01:14:39.919417    7488 type.go:168] "Request Body" body=""
	I0409 01:14:39.919939    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:39.919939    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:39.919939    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:39.919939    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:39.924015    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:39.924096    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:39.924096    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:39.924096    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:39 GMT
	I0409 01:14:39.924096    7488 round_trippers.go:587]     Audit-Id: b230bb0d-3b66-4857-be8e-de25734a32aa
	I0409 01:14:39.924096    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:39.924096    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:39.924096    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:39.924558    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:39.924932    7488 type.go:168] "Request Body" body=""
	I0409 01:14:39.924991    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:39.924991    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:39.924991    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:39.924991    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:39.927740    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:39.927740    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:39.928438    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:39.928438    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:39 GMT
	I0409 01:14:39.928438    7488 round_trippers.go:587]     Audit-Id: 05d24e90-6fe9-479b-bb02-fa3a7d6a6092
	I0409 01:14:39.928438    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:39.928438    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:39.928438    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:39.928761    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:40.419147    7488 type.go:168] "Request Body" body=""
	I0409 01:14:40.419147    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:40.419147    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:40.419147    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:40.419147    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:40.423552    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:40.423552    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:40.423552    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:40.423659    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:40.423659    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:40.423659    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:40.423659    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:40 GMT
	I0409 01:14:40.423731    7488 round_trippers.go:587]     Audit-Id: d6f08b33-6e06-4b8f-ae44-18511b9a99fb
	I0409 01:14:40.423987    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:40.424230    7488 type.go:168] "Request Body" body=""
	I0409 01:14:40.424338    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:40.424338    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:40.424338    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:40.424338    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:40.426719    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:40.426832    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:40.426832    7488 round_trippers.go:587]     Audit-Id: 682e451c-41fa-4fd1-9e85-28baf5d12014
	I0409 01:14:40.426832    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:40.426832    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:40.426832    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:40.426832    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:40.426832    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:40 GMT
	I0409 01:14:40.427234    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:40.918774    7488 type.go:168] "Request Body" body=""
	I0409 01:14:40.918774    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:40.918774    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:40.918774    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:40.918774    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:40.923763    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:40.923763    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:40.923763    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:40.923763    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:40 GMT
	I0409 01:14:40.923763    7488 round_trippers.go:587]     Audit-Id: 763ce7d5-45cb-4e11-8bd3-fdbb0518d83d
	I0409 01:14:40.923763    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:40.923763    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:40.923763    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:40.924418    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:40.924784    7488 type.go:168] "Request Body" body=""
	I0409 01:14:40.924947    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:40.924947    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:40.924947    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:40.924947    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:40.927293    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:40.927293    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:40.927293    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:40.927293    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:40.927293    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:40.927293    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:40 GMT
	I0409 01:14:40.927293    7488 round_trippers.go:587]     Audit-Id: b5bfa7a4-77dc-4bcd-819a-3bf36e285e05
	I0409 01:14:40.927293    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:40.928686    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:41.418845    7488 type.go:168] "Request Body" body=""
	I0409 01:14:41.418845    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:41.418845    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:41.418845    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:41.418845    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:41.426056    7488 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0409 01:14:41.426056    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:41.426056    7488 round_trippers.go:587]     Audit-Id: 117217a5-8245-4e5b-a122-9cc38dc6aca8
	I0409 01:14:41.426056    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:41.426056    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:41.426056    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:41.426056    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:41.426056    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:41 GMT
	I0409 01:14:41.426778    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:41.426778    7488 type.go:168] "Request Body" body=""
	I0409 01:14:41.426778    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:41.426778    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:41.426778    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:41.426778    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:41.431090    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:41.431748    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:41.431748    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:41.431748    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:41.431748    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:41.431748    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:41.431748    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:41 GMT
	I0409 01:14:41.431824    7488 round_trippers.go:587]     Audit-Id: 71cb979d-a7eb-4ec1-a3ad-e6e5139d4d50
	I0409 01:14:41.432180    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:41.432297    7488 pod_ready.go:103] pod "kube-controller-manager-multinode-611500" in "kube-system" namespace has status "Ready":"False"
	I0409 01:14:41.919077    7488 type.go:168] "Request Body" body=""
	I0409 01:14:41.919077    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:41.919077    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:41.919077    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:41.919077    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:41.923056    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:41.924048    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:41.924048    7488 round_trippers.go:587]     Audit-Id: 4925a799-451f-4d81-bd04-abd243886971
	I0409 01:14:41.924048    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:41.924048    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:41.924048    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:41.924048    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:41.924048    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:41 GMT
	I0409 01:14:41.924048    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:41.924048    7488 type.go:168] "Request Body" body=""
	I0409 01:14:41.924048    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:41.924048    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:41.924048    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:41.924048    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:41.928532    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:41.928646    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:41.928646    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:41.928646    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:41.928697    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:41.928697    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:41.928697    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:41 GMT
	I0409 01:14:41.928697    7488 round_trippers.go:587]     Audit-Id: de86a095-8831-4dd6-aa3f-817a4c9b8247
	I0409 01:14:41.929028    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:42.419054    7488 type.go:168] "Request Body" body=""
	I0409 01:14:42.419439    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:42.419514    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:42.419543    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:42.419543    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:42.426083    7488 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0409 01:14:42.426083    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:42.426083    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:42.426083    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:42.426083    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:42.426083    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:42 GMT
	I0409 01:14:42.426260    7488 round_trippers.go:587]     Audit-Id: 8ca4ba7b-a70e-4339-95d5-5539ea0d5a84
	I0409 01:14:42.426260    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:42.426308    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:42.426967    7488 type.go:168] "Request Body" body=""
	I0409 01:14:42.426999    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:42.426999    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:42.426999    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:42.426999    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:42.430485    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:42.430485    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:42.430485    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:42.430485    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:42.430485    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:42.430485    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:42 GMT
	I0409 01:14:42.430485    7488 round_trippers.go:587]     Audit-Id: c0cf7400-ec2c-4055-84d6-80669e212fc4
	I0409 01:14:42.430485    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:42.430613    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:42.918760    7488 type.go:168] "Request Body" body=""
	I0409 01:14:42.918760    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:42.918760    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:42.918760    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:42.918760    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:42.923522    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:42.923686    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:42.923686    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:42.923686    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:42 GMT
	I0409 01:14:42.923686    7488 round_trippers.go:587]     Audit-Id: cf2797fd-717d-47f8-928e-cb2575b81215
	I0409 01:14:42.923686    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:42.923686    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:42.923686    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:42.923686    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:42.924504    7488 type.go:168] "Request Body" body=""
	I0409 01:14:42.924575    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:42.924575    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:42.924575    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:42.924575    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:42.927516    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:42.927516    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:42.927516    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:42 GMT
	I0409 01:14:42.927516    7488 round_trippers.go:587]     Audit-Id: 43bcf67d-ce47-4ef6-96e5-d170d461c1c6
	I0409 01:14:42.927516    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:42.927650    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:42.927650    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:42.927650    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:42.927906    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:43.419674    7488 type.go:168] "Request Body" body=""
	I0409 01:14:43.419674    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:43.419674    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:43.419674    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:43.419674    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:43.423728    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:43.423728    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:43.423728    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:43 GMT
	I0409 01:14:43.423728    7488 round_trippers.go:587]     Audit-Id: 28523091-764c-4bf4-b928-ea8e1b9fce75
	I0409 01:14:43.423728    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:43.423728    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:43.423728    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:43.423728    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:43.423728    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:43.425124    7488 type.go:168] "Request Body" body=""
	I0409 01:14:43.425300    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:43.425300    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:43.425383    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:43.425383    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:43.429140    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:43.429205    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:43.429205    7488 round_trippers.go:587]     Audit-Id: 8a3f782f-78ed-4cc8-a656-888df7d51dce
	I0409 01:14:43.429281    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:43.429281    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:43.429281    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:43.429281    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:43.429281    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:43 GMT
	I0409 01:14:43.429512    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:43.919914    7488 type.go:168] "Request Body" body=""
	I0409 01:14:43.920065    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:43.920065    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:43.920065    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:43.920065    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:43.924207    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:43.924207    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:43.924207    7488 round_trippers.go:587]     Audit-Id: 83dd3892-d127-4069-82ed-12175a79050a
	I0409 01:14:43.924207    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:43.924326    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:43.924326    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:43.924326    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:43.924326    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:43 GMT
	I0409 01:14:43.925149    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:43.925334    7488 type.go:168] "Request Body" body=""
	I0409 01:14:43.925334    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:43.925334    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:43.925334    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:43.925334    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:43.927162    7488 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0409 01:14:43.928091    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:43.928091    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:43.928091    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:43.928091    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:43.928183    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:43 GMT
	I0409 01:14:43.928183    7488 round_trippers.go:587]     Audit-Id: 3f9742bd-9e52-4b07-99c3-adc043a7287b
	I0409 01:14:43.928183    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:43.928521    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:43.928619    7488 pod_ready.go:103] pod "kube-controller-manager-multinode-611500" in "kube-system" namespace has status "Ready":"False"
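The request and response bodies dumped above are not JSON: the client asks for application/vnd.kubernetes.protobuf, so each body is a 4-byte "k8s\x00" magic followed by a protobuf-encoded runtime.Unknown whose first field is the TypeMeta (which is why "v1..Pod" and "v1..Node" are readable near the start of every hex dump). A minimal Go sketch of peeling that envelope, using the opening bytes copied from the dump above; the manual varint walk is illustrative, not minikube's own code:

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// readString parses one length-delimited protobuf field (tag already
// consumed) and returns the string plus the remaining buffer.
func readString(b []byte) (string, []byte, error) {
	l, n := binary.Uvarint(b)
	if n <= 0 || uint64(len(b)-n) < l {
		return "", nil, fmt.Errorf("truncated field")
	}
	return string(b[n : n+int(l)]), b[n+int(l):], nil
}

// decodeEnvelope strips the 4-byte "k8s\x00" magic and pulls apiVersion
// and kind out of the runtime.Unknown TypeMeta submessage (field 1).
func decodeEnvelope(b []byte) (apiVersion, kind string, err error) {
	if !bytes.HasPrefix(b, []byte("k8s\x00")) {
		return "", "", fmt.Errorf("missing k8s magic prefix")
	}
	b = b[4:]
	if len(b) == 0 || b[0] != 0x0a { // tag: field 1, wire type 2 (TypeMeta)
		return "", "", fmt.Errorf("expected TypeMeta tag")
	}
	meta, _, err := readString(b[1:])
	if err != nil {
		return "", "", err
	}
	m := []byte(meta)
	for len(m) > 0 {
		tag := m[0]
		var s string
		s, m, err = readString(m[1:])
		if err != nil {
			return "", "", err
		}
		switch tag {
		case 0x0a: // TypeMeta field 1: apiVersion
			apiVersion = s
		case 0x12: // TypeMeta field 2: kind
			kind = s
		}
	}
	return apiVersion, kind, nil
}

func main() {
	// First bytes of the Pod response body from the hex dump above.
	raw := []byte{0x6b, 0x38, 0x73, 0x00, 0x0a, 0x09, 0x0a, 0x02,
		0x76, 0x31, 0x12, 0x03, 0x50, 0x6f, 0x64}
	v, k, err := decodeEnvelope(raw)
	fmt.Println(v, k, err) // prints: v1 Pod <nil>
}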
	I0409 01:14:44.419618    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.419618    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:44.419618    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.419618    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.419618    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.423962    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:44.423962    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.423962    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.423962    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.423962    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.423962    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.423962    7488 round_trippers.go:587]     Audit-Id: 18fdad6b-3537-432a-9778-69ab0fcc589e
	I0409 01:14:44.423962    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.425859    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b6 33 0a d6 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 37 33 38 00 42 08  |ec96062.19738.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31599 chars]
	 >
	I0409 01:14:44.426000    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.426000    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:44.426000    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.426000    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.426000    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.429847    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:44.430535    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.430535    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.430535    7488 round_trippers.go:587]     Audit-Id: 68e4aa3e-6d60-448c-b71b-d96ee7e78ee6
	I0409 01:14:44.430535    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.430535    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.430535    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.430535    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.431135    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:44.918844    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.918844    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-611500
	I0409 01:14:44.918844    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.918844    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.918844    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.924674    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:44.924780    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.924780    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.924780    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.924780    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.924780    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.924780    7488 round_trippers.go:587]     Audit-Id: a2fb0a9d-377f-48d2-b721-a9fe6e80d937
	I0409 01:14:44.924840    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.925001    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  e4 31 0a 9c 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.1....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 36 31 31 35 30 30 12  |ultinode-611500.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 37 35 61 66 30 62  39 30 2d 36 63 37 32 2d  |*$75af0b90-6c72-|
		00000060  34 36 32 34 2d 38 36 36  30 2d 61 61 39 34 33 66  |4624-8660-aa943f|
		00000070  65 63 39 36 30 36 32 04  31 39 38 38 38 00 42 08  |ec96062.19888.B.|
		00000080  08 90 88 d7 bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 30570 chars]
	 >
	I0409 01:14:44.925823    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.925823    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:44.925935    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.925935    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.925935    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.929291    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:44.929491    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.929491    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.929491    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.929491    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.929491    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.929491    7488 round_trippers.go:587]     Audit-Id: c5ceb222-6933-465c-a2c6-2433e4349138
	I0409 01:14:44.929491    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.929491    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:44.929491    7488 pod_ready.go:93] pod "kube-controller-manager-multinode-611500" in "kube-system" namespace has status "Ready":"True"
	I0409 01:14:44.929491    7488 pod_ready.go:82] duration metric: took 12.5108216s for pod "kube-controller-manager-multinode-611500" in "kube-system" namespace to be "Ready" ...
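The ~500ms cadence between the GETs above (…:43.92, …:44.41, …:44.91) is a plain poll of the pod's Ready condition until it flips to True. A sketch of the same wait with client-go; the kubeconfig path is a placeholder and minikube's pod_ready.go differs in detail:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls every 500ms, the cadence visible in the timestamps
// above, until the pod's Ready condition is True or 6m0s elapses.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as "not yet"; keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitPodReady(context.Background(), cs,
		"kube-system", "kube-controller-manager-multinode-611500")
	fmt.Println("ready:", err == nil)
}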
	I0409 01:14:44.929491    7488 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bhjnx" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:44.930024    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.930167    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bhjnx
	I0409 01:14:44.930167    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.930167    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.930167    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.935246    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:44.935246    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.935246    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.935246    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.935246    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.935246    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.935246    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.935246    7488 round_trippers.go:587]     Audit-Id: 37899556-4b16-40bf-9c08-a0d91019d95f
	I0409 01:14:44.935975    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  af 25 0a c1 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 62 68 6a 6e 78 12  0b 6b 75 62 65 2d 70 72  |y-bhjnx..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 61 66 62  36 64 61 39 39 2d 64 65  |m".*$afb6da99-de|
		00000050  39 39 2d 34 39 63 34 2d  62 30 38 30 2d 38 35 30  |99-49c4-b080-850|
		00000060  30 62 34 62 30 38 64 39  62 32 03 36 32 35 38 00  |0b4b08d9b2.6258.|
		00000070  42 08 08 d1 89 d7 bf 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  37 62 62 38 34 63 34 39  |n-hash..7bb84c49|
		000000a0  38 34 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |84Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22744 chars]
	 >
	I0409 01:14:44.935975    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.935975    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500-m02
	I0409 01:14:44.935975    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.935975    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.935975    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.938369    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:44.938369    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.938369    7488 round_trippers.go:587]     Audit-Id: f2d09414-548b-4455-8dc7-5f0939635475
	I0409 01:14:44.938369    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.938369    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.938369    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.938369    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.938369    7488 round_trippers.go:587]     Content-Length: 3466
	I0409 01:14:44.938369    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.939427    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f3 1a 0a b0 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 32 12 00 1a 00  |e-611500-m02....|
		00000030  22 00 2a 24 31 34 63 36  35 39 31 30 2d 30 30 62  |".*$14c65910-00b|
		00000040  38 2d 34 39 31 34 2d 62  63 32 36 2d 31 38 65 37  |8-4914-bc26-18e7|
		00000050  62 64 33 39 66 61 66 33  32 04 31 37 37 34 38 00  |bd39faf32.17748.|
		00000060  42 08 08 d1 89 d7 bf 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16113 chars]
	 >
	I0409 01:14:44.939427    7488 pod_ready.go:93] pod "kube-proxy-bhjnx" in "kube-system" namespace has status "Ready":"True"
	I0409 01:14:44.939427    7488 pod_ready.go:82] duration metric: took 9.9362ms for pod "kube-proxy-bhjnx" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:44.939427    7488 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xnh8p" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:44.939427    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.939427    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xnh8p
	I0409 01:14:44.939427    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.939427    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.939427    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.942108    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:44.942108    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.942108    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.942108    7488 round_trippers.go:587]     Audit-Id: 76b32b12-8b9f-4747-8587-6309e900ebd7
	I0409 01:14:44.942108    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.942108    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.942108    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.942108    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.943265    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b4 26 0a c5 15 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 78 6e 68 38 70 12  0b 6b 75 62 65 2d 70 72  |y-xnh8p..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 65 64 38  65 39 34 34 65 2d 65 37  |m".*$ed8e944e-e7|
		00000050  33 64 2d 34 34 34 63 2d  62 31 65 65 2d 64 37 31  |3d-444c-b1ee-d71|
		00000060  35 35 63 37 37 31 63 39  36 32 04 31 38 31 31 38  |55c771c962.18118|
		00000070  00 42 08 08 f5 8b d7 bf  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 37 62 62 38 34 63 34  |on-hash..7bb84c4|
		000000a0  39 38 34 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |984Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23381 chars]
	 >
	I0409 01:14:44.943327    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.943327    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500-m03
	I0409 01:14:44.943327    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.943327    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.943327    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.947005    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:44.947005    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.947469    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.947469    7488 round_trippers.go:587]     Content-Length: 3885
	I0409 01:14:44.947469    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.947469    7488 round_trippers.go:587]     Audit-Id: 786a32a3-bbd7-4372-920e-aa866bf04237
	I0409 01:14:44.947469    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.947510    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.947510    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.947807    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 96 1e 0a eb 12 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 36 31 31 35 30 30  2d 6d 30 33 12 00 1a 00  |e-611500-m03....|
		00000030  22 00 2a 24 38 63 66 33  37 34 64 36 2d 31 66 62  |".*$8cf374d6-1fb|
		00000040  30 2d 34 30 36 38 2d 39  62 66 39 2d 30 62 32 37  |0-4068-9bf9-0b27|
		00000050  61 34 32 61 63 66 34 39  32 04 31 39 38 33 38 00  |a42acf492.19838.|
		00000060  42 08 08 a0 91 d7 bf 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 18170 chars]
	 >
	I0409 01:14:44.947807    7488 pod_ready.go:98] node "multinode-611500-m03" hosting pod "kube-proxy-xnh8p" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500-m03" has status "Ready":"Unknown"
	I0409 01:14:44.947807    7488 pod_ready.go:82] duration metric: took 8.3796ms for pod "kube-proxy-xnh8p" in "kube-system" namespace to be "Ready" ...
	E0409 01:14:44.947807    7488 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-611500-m03" hosting pod "kube-proxy-xnh8p" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-611500-m03" has status "Ready":"Unknown"
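Note the skip above: the wait for kube-proxy-xnh8p is abandoned, not failed, because its hosting node multinode-611500-m03 reports Ready=Unknown (the machine is down at this point in the test). The node-side check reduces to reading one condition off the Node object, roughly:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady mirrors the skip logic above: a pod is only waited on when
// its hosting node reports Ready=True; Unknown (e.g. a stopped VM)
// short-circuits the wait instead of burning the 6m timeout.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	n, err := cs.CoreV1().Nodes().Get(context.Background(),
		"multinode-611500-m03", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("node ready:", nodeReady(n)) // false while the VM is stopped
}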
	I0409 01:14:44.947807    7488 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zxxgf" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:44.947807    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.947807    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zxxgf
	I0409 01:14:44.947807    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.947807    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.947807    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.951574    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:44.951574    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.951574    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.951574    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.951574    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.951574    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.952036    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.952036    7488 round_trippers.go:587]     Audit-Id: ad40fb34-ba59-45b5-8d42-a11a9eb73753
	I0409 01:14:44.953016    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  96 26 0a c2 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 7a 78 78 67 66 12  0b 6b 75 62 65 2d 70 72  |y-zxxgf..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 33 35 30  36 65 65 65 37 2d 64 39  |m".*$3506eee7-d9|
		00000050  34 36 2d 34 64 64 65 2d  39 31 63 39 2d 39 66 63  |46-4dde-91c9-9fc|
		00000060  35 63 31 34 37 34 34 33  34 32 04 31 39 33 32 38  |5c14744342.19328|
		00000070  00 42 08 08 96 88 d7 bf  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 37 62 62 38 34 63 34  |on-hash..7bb84c4|
		000000a0  39 38 34 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |984Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23225 chars]
	 >
	I0409 01:14:44.953285    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.953358    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:44.953358    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.953408    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.953408    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.956009    7488 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0409 01:14:44.956058    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.956058    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.956132    7488 round_trippers.go:587]     Audit-Id: 5aea1c8c-17b9-4705-b4b1-4fee2f869a28
	I0409 01:14:44.956132    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.956132    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.956132    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.956132    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.956958    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:44.957009    7488 pod_ready.go:93] pod "kube-proxy-zxxgf" in "kube-system" namespace has status "Ready":"True"
	I0409 01:14:44.957009    7488 pod_ready.go:82] duration metric: took 9.2013ms for pod "kube-proxy-zxxgf" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:44.957009    7488 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:44.957009    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.957009    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-611500
	I0409 01:14:44.957009    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.957009    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.957009    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.962375    7488 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0409 01:14:44.962375    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.962375    7488 round_trippers.go:587]     Audit-Id: a774ce8c-3a2d-4734-bea5-14e99139eec1
	I0409 01:14:44.962375    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.962375    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.962375    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.962375    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.962375    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.963044    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  ef 23 0a 84 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.#.....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  36 31 31 35 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |611500....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 39 31 38 35 64 35 63  |ystem".*$9185d5c|
		00000050  30 2d 62 32 38 61 2d 34  33 38 63 2d 62 30 35 61  |0-b28a-438c-b05a|
		00000060  2d 36 34 36 36 37 65 34  61 63 33 64 37 32 04 31  |-64667e4ac3d72.1|
		00000070  38 35 33 38 00 42 08 08  90 88 d7 bf 06 10 00 5a  |8538.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 73 63 68 65 64  75 6c 65 72 5a 15 0a 04  |be-schedulerZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 3d 0a 19 6b  75 62 65 72 6e 65 74 65  |aneb=..kubernete|
		000000c0  73 2e 69 6f 2f 63 6f 6e  66 69 67 2e 68 61 73 68  |s.io/config.has [truncated 21796 chars]
	 >
	I0409 01:14:44.963327    7488 type.go:168] "Request Body" body=""
	I0409 01:14:44.963327    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes/multinode-611500
	I0409 01:14:44.963327    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:44.963327    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:44.963327    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:44.968296    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:44.968354    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:44.968354    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:44.968354    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:44.968465    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:44.968465    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:44.968465    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:44 GMT
	I0409 01:14:44.968500    7488 round_trippers.go:587]     Audit-Id: 65042669-7c4a-4699-b4b3-25285f535fe2
	I0409 01:14:44.968773    7488 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d5 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 36 31 31 35 30 30  12 00 1a 00 22 00 2a 24  |e-611500....".*$|
		00000030  62 31 32 35 32 66 34 61  2d 32 32 33 30 2d 34 36  |b1252f4a-2230-46|
		00000040  61 36 2d 39 33 38 62 2d  37 63 30 37 31 31 31 33  |a6-938b-7c071113|
		00000050  33 34 32 34 32 04 31 39  35 39 38 00 42 08 08 8d  |34242.19598.B...|
		00000060  88 d7 bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22277 chars]
	 >
	I0409 01:14:44.968773    7488 pod_ready.go:93] pod "kube-scheduler-multinode-611500" in "kube-system" namespace has status "Ready":"True"
	I0409 01:14:44.968773    7488 pod_ready.go:82] duration metric: took 11.7646ms for pod "kube-scheduler-multinode-611500" in "kube-system" namespace to be "Ready" ...
	I0409 01:14:44.968773    7488 pod_ready.go:39] duration metric: took 16.5871816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0409 01:14:44.968773    7488 api_server.go:52] waiting for apiserver process to appear ...
	I0409 01:14:44.981879    7488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 01:14:45.013834    7488 command_runner.go:130] > 2024
	I0409 01:14:45.013970    7488 api_server.go:72] duration metric: took 28.999296s to wait for apiserver process to appear ...
	I0409 01:14:45.013970    7488 api_server.go:88] waiting for apiserver healthz status ...
	I0409 01:14:45.014026    7488 api_server.go:253] Checking apiserver healthz at https://192.168.120.172:8443/healthz ...
	I0409 01:14:45.021850    7488 api_server.go:279] https://192.168.120.172:8443/healthz returned 200:
	ok
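The healthz probe above is an ordinary HTTPS GET expecting a 200 with body "ok". A self-contained sketch; the IP is the one from this run, and InsecureSkipVerify stands in for loading the cluster CA that the real client verifies against:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues the same probe as the log above: GET the
// apiserver's /healthz and report status code plus body.
func checkHealthz(url string) (string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Placeholder for CA verification against minikube's ca.crt.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%d %s", resp.StatusCode, body), nil
}

func main() {
	out, err := checkHealthz("https://192.168.120.172:8443/healthz")
	fmt.Println(out, err) // expect: 200 ok
}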
	I0409 01:14:45.021850    7488 discovery_client.go:658] "Request Body" body=""
	I0409 01:14:45.021850    7488 round_trippers.go:470] GET https://192.168.120.172:8443/version
	I0409 01:14:45.021850    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:45.021850    7488 round_trippers.go:480]     Accept: application/json, */*
	I0409 01:14:45.021850    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:45.024857    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:45.024881    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:45.024881    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:45.024881    7488 round_trippers.go:587]     Content-Type: application/json
	I0409 01:14:45.024972    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:45.024972    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:45.024972    7488 round_trippers.go:587]     Content-Length: 263
	I0409 01:14:45.024972    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:45 GMT
	I0409 01:14:45.024972    7488 round_trippers.go:587]     Audit-Id: e905e7ce-9570-47e1-92ac-368bd324818a
	I0409 01:14:45.025040    7488 discovery_client.go:658] "Response Body" body=<
		{
		  "major": "1",
		  "minor": "32",
		  "gitVersion": "v1.32.2",
		  "gitCommit": "67a30c0adcf52bd3f56ff0893ce19966be12991f",
		  "gitTreeState": "clean",
		  "buildDate": "2025-02-12T21:19:47Z",
		  "goVersion": "go1.23.6",
		  "compiler": "gc",
		  "platform": "linux/amd64"
		}
	 >
	I0409 01:14:45.025156    7488 api_server.go:141] control plane version: v1.32.2
	I0409 01:14:45.025182    7488 api_server.go:131] duration metric: took 11.2114ms to wait for apiserver health ...
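The control-plane version is read straight out of the /version JSON shown above. A minimal decode of that payload (fields abridged to the ones the comparison needs):

package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo matches the /version payload above; the gitVersion field
// is what gets compared against the requested Kubernetes version.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	// Body copied (abridged) from the response logged above.
	body := []byte(`{"major":"1","minor":"32","gitVersion":"v1.32.2","platform":"linux/amd64"}`)
	var v versionInfo
	if err := json.Unmarshal(body, &v); err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion) // v1.32.2
}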
	I0409 01:14:45.025215    7488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0409 01:14:45.025239    7488 type.go:204] "Request Body" body=""
	I0409 01:14:45.119059    7488 request.go:661] Waited for 93.7179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods
	I0409 01:14:45.119292    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods
	I0409 01:14:45.119292    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:45.119292    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:45.119292    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:45.124200    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:45.124284    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:45.124284    7488 round_trippers.go:587]     Audit-Id: 590cddcd-539b-4788-be4a-345f623f9937
	I0409 01:14:45.124376    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:45.124376    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:45.124376    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:45.124376    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:45.124497    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:45 GMT
	I0409 01:14:45.127632    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 a2 eb 03 0a  0a 0a 00 12 04 31 39 38  |ist..........198|
		00000020  38 1a 00 12 c7 28 0a af  19 0a 18 63 6f 72 65 64  |8....(.....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 64 35  |ns-668d6bf9bc-d5|
		00000040  34 73 34 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |4s4..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 31 32 34 33 31 66 32  |ystem".*$12431f2|
		00000070  37 2d 37 65 34 65 2d 34  31 63 39 2d 38 64 35 34  |7-7e4e-41c9-8d54|
		00000080  2d 62 63 37 62 65 32 30  37 34 62 39 63 32 04 31  |-bc7be2074b9c2.1|
		00000090  39 37 36 38 00 42 08 08  96 88 d7 bf 06 10 00 5a  |9768.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 309601 chars]
	 >
	I0409 01:14:45.128474    7488 system_pods.go:59] 12 kube-system pods found
	I0409 01:14:45.128606    7488 system_pods.go:61] "coredns-668d6bf9bc-d54s4" [12431f27-7e4e-41c9-8d54-bc7be2074b9c] Running
	I0409 01:14:45.128606    7488 system_pods.go:61] "etcd-multinode-611500" [e6b39b1a-a6d5-46d1-a56a-243c9bb6f563] Running
	I0409 01:14:45.128606    7488 system_pods.go:61] "kindnet-66fr6" [3127adff-6b68-4ae6-8fea-cbee940bb9df] Running
	I0409 01:14:45.128606    7488 system_pods.go:61] "kindnet-v66j5" [9200b124-3c4b-442b-99fd-49ccc2faf534] Running
	I0409 01:14:45.128606    7488 system_pods.go:61] "kindnet-vntlr" [2e088361-08c9-4325-8241-20f5f443dcf6] Running
	I0409 01:14:45.128606    7488 system_pods.go:61] "kube-apiserver-multinode-611500" [f9924754-f8c5-4a8b-9da2-23d8096a5ecf] Running
	I0409 01:14:45.128606    7488 system_pods.go:61] "kube-controller-manager-multinode-611500" [75af0b90-6c72-4624-8660-aa943fec9606] Running
	I0409 01:14:45.128606    7488 system_pods.go:61] "kube-proxy-bhjnx" [afb6da99-de99-49c4-b080-8500b4b08d9b] Running
	I0409 01:14:45.128606    7488 system_pods.go:61] "kube-proxy-xnh8p" [ed8e944e-e73d-444c-b1ee-d7155c771c96] Running
	I0409 01:14:45.128678    7488 system_pods.go:61] "kube-proxy-zxxgf" [3506eee7-d946-4dde-91c9-9fc5c1474434] Running
	I0409 01:14:45.128714    7488 system_pods.go:61] "kube-scheduler-multinode-611500" [9185d5c0-b28a-438c-b05a-64667e4ac3d7] Running
	I0409 01:14:45.128714    7488 system_pods.go:61] "storage-provisioner" [8f7ea37f-c3a7-44fc-ac99-c184b674aca3] Running
	I0409 01:14:45.128714    7488 system_pods.go:74] duration metric: took 103.4738ms to wait for pod list to return data ...
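The "Waited for … due to client-side throttling, not priority and fairness" lines here and below come from client-go's own token-bucket limiter, not from the apiserver: rest.Config defaults to QPS 5 / Burst 10, so back-to-back list calls each pay a ~90-200ms refill wait. A sketch of where those knobs live (values illustrative, kubeconfig path a placeholder):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10; the waits logged above are the token
	// bucket refilling between consecutive pod/serviceaccount/node lists.
	cfg.QPS = 50
	cfg.Burst = 100
	fmt.Printf("client limiter: qps=%v burst=%v\n", cfg.QPS, cfg.Burst)
}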
	I0409 01:14:45.128714    7488 default_sa.go:34] waiting for default service account to be created ...
	I0409 01:14:45.128838    7488 type.go:204] "Request Body" body=""
	I0409 01:14:45.319579    7488 request.go:661] Waited for 190.7394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/namespaces/default/serviceaccounts
	I0409 01:14:45.319803    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/default/serviceaccounts
	I0409 01:14:45.319803    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:45.319803    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:45.319803    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:45.323726    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:45.323726    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:45.323824    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:45.323824    7488 round_trippers.go:587]     Content-Length: 129
	I0409 01:14:45.323824    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:45 GMT
	I0409 01:14:45.323824    7488 round_trippers.go:587]     Audit-Id: d60fc6c7-cea1-4b35-87ed-4038ee20c28d
	I0409 01:14:45.323824    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:45.323824    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:45.323824    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:45.323910    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 18 0a 02  76 31 12 12 53 65 72 76  |k8s.....v1..Serv|
		00000010  69 63 65 41 63 63 6f 75  6e 74 4c 69 73 74 12 5d  |iceAccountList.]|
		00000020  0a 0a 0a 00 12 04 31 39  38 38 1a 00 12 4f 0a 4d  |......1988...O.M|
		00000030  0a 07 64 65 66 61 75 6c  74 12 00 1a 07 64 65 66  |..default....def|
		00000040  61 75 6c 74 22 00 2a 24  35 65 63 37 63 31 66 66  |ault".*$5ec7c1ff|
		00000050  2d 31 63 66 31 2d 34 64  30 32 2d 38 61 65 33 2d  |-1cf1-4d02-8ae3-|
		00000060  35 62 66 35 65 30 39 65  66 33 37 37 32 03 33 32  |5bf5e09ef3772.32|
		00000070  36 38 00 42 08 08 95 88  d7 bf 06 10 00 1a 00 22  |68.B..........."|
		00000080  00                                                |.|
	 >
	I0409 01:14:45.324017    7488 default_sa.go:45] found service account: "default"
	I0409 01:14:45.324017    7488 default_sa.go:55] duration metric: took 195.3007ms for default service account to be created ...
	I0409 01:14:45.324017    7488 system_pods.go:116] waiting for k8s-apps to be running ...
	I0409 01:14:45.324017    7488 type.go:204] "Request Body" body=""
	I0409 01:14:45.519813    7488 request.go:661] Waited for 195.7941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods
	I0409 01:14:45.520273    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/namespaces/kube-system/pods
	I0409 01:14:45.520273    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:45.520273    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:45.520273    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:45.524962    7488 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0409 01:14:45.525009    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:45.525009    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:45 GMT
	I0409 01:14:45.525009    7488 round_trippers.go:587]     Audit-Id: aa927236-d6b0-4d33-9b82-f63c212cd579
	I0409 01:14:45.525009    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:45.525009    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:45.525009    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:45.525009    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:45.528326    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 a2 eb 03 0a  0a 0a 00 12 04 31 39 38  |ist..........198|
		00000020  38 1a 00 12 c7 28 0a af  19 0a 18 63 6f 72 65 64  |8....(.....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 64 35  |ns-668d6bf9bc-d5|
		00000040  34 73 34 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |4s4..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 31 32 34 33 31 66 32  |ystem".*$12431f2|
		00000070  37 2d 37 65 34 65 2d 34  31 63 39 2d 38 64 35 34  |7-7e4e-41c9-8d54|
		00000080  2d 62 63 37 62 65 32 30  37 34 62 39 63 32 04 31  |-bc7be2074b9c2.1|
		00000090  39 37 36 38 00 42 08 08  96 88 d7 bf 06 10 00 5a  |9768.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 309601 chars]
	 >
	I0409 01:14:45.529089    7488 system_pods.go:86] 12 kube-system pods found
	I0409 01:14:45.529152    7488 system_pods.go:89] "coredns-668d6bf9bc-d54s4" [12431f27-7e4e-41c9-8d54-bc7be2074b9c] Running
	I0409 01:14:45.529152    7488 system_pods.go:89] "etcd-multinode-611500" [e6b39b1a-a6d5-46d1-a56a-243c9bb6f563] Running
	I0409 01:14:45.529152    7488 system_pods.go:89] "kindnet-66fr6" [3127adff-6b68-4ae6-8fea-cbee940bb9df] Running
	I0409 01:14:45.529152    7488 system_pods.go:89] "kindnet-v66j5" [9200b124-3c4b-442b-99fd-49ccc2faf534] Running
	I0409 01:14:45.529234    7488 system_pods.go:89] "kindnet-vntlr" [2e088361-08c9-4325-8241-20f5f443dcf6] Running
	I0409 01:14:45.529234    7488 system_pods.go:89] "kube-apiserver-multinode-611500" [f9924754-f8c5-4a8b-9da2-23d8096a5ecf] Running
	I0409 01:14:45.529234    7488 system_pods.go:89] "kube-controller-manager-multinode-611500" [75af0b90-6c72-4624-8660-aa943fec9606] Running
	I0409 01:14:45.529234    7488 system_pods.go:89] "kube-proxy-bhjnx" [afb6da99-de99-49c4-b080-8500b4b08d9b] Running
	I0409 01:14:45.529234    7488 system_pods.go:89] "kube-proxy-xnh8p" [ed8e944e-e73d-444c-b1ee-d7155c771c96] Running
	I0409 01:14:45.529234    7488 system_pods.go:89] "kube-proxy-zxxgf" [3506eee7-d946-4dde-91c9-9fc5c1474434] Running
	I0409 01:14:45.529234    7488 system_pods.go:89] "kube-scheduler-multinode-611500" [9185d5c0-b28a-438c-b05a-64667e4ac3d7] Running
	I0409 01:14:45.529288    7488 system_pods.go:89] "storage-provisioner" [8f7ea37f-c3a7-44fc-ac99-c184b674aca3] Running
	I0409 01:14:45.529288    7488 system_pods.go:126] duration metric: took 205.2683ms to wait for k8s-apps to be running ...
	I0409 01:14:45.529320    7488 system_svc.go:44] waiting for kubelet service to be running ....
	I0409 01:14:45.538883    7488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0409 01:14:45.567550    7488 system_svc.go:56] duration metric: took 38.2302ms WaitForService to wait for kubelet
	I0409 01:14:45.567550    7488 kubeadm.go:582] duration metric: took 29.5530053s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0409 01:14:45.567550    7488 node_conditions.go:102] verifying NodePressure condition ...
	I0409 01:14:45.567550    7488 type.go:204] "Request Body" body=""
	I0409 01:14:45.719878    7488 request.go:661] Waited for 152.3256ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.120.172:8443/api/v1/nodes
	I0409 01:14:45.720215    7488 round_trippers.go:470] GET https://192.168.120.172:8443/api/v1/nodes
	I0409 01:14:45.720215    7488 round_trippers.go:476] Request Headers:
	I0409 01:14:45.720215    7488 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0409 01:14:45.720215    7488 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0409 01:14:45.723687    7488 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0409 01:14:45.723687    7488 round_trippers.go:584] Response Headers:
	I0409 01:14:45.723687    7488 round_trippers.go:587]     Audit-Id: fe63e8a5-42f1-4590-8d1f-b24e839c954a
	I0409 01:14:45.723687    7488 round_trippers.go:587]     Cache-Control: no-cache, private
	I0409 01:14:45.723687    7488 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0409 01:14:45.723687    7488 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 141c83c4-ee66-42c6-9feb-4766c1576bd3
	I0409 01:14:45.723687    7488 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: a1fddc11-4f62-49c5-8983-11f6d9b85b23
	I0409 01:14:45.723687    7488 round_trippers.go:587]     Date: Wed, 09 Apr 2025 01:14:45 GMT
	I0409 01:14:45.724465    7488 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 f3 5d 0a  0a 0a 00 12 04 31 39 38  |List..]......198|
		00000020  38 1a 00 12 d5 24 0a f8  11 0a 10 6d 75 6c 74 69  |8....$.....multi|
		00000030  6e 6f 64 65 2d 36 31 31  35 30 30 12 00 1a 00 22  |node-611500...."|
		00000040  00 2a 24 62 31 32 35 32  66 34 61 2d 32 32 33 30  |.*$b1252f4a-2230|
		00000050  2d 34 36 61 36 2d 39 33  38 62 2d 37 63 30 37 31  |-46a6-938b-7c071|
		00000060  31 31 33 33 34 32 34 32  04 31 39 35 39 38 00 42  |11334242.19598.B|
		00000070  08 08 8d 88 d7 bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000080  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000090  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		000000a0  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000b0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/ar [truncated 58461 chars]
	 >
	I0409 01:14:45.724799    7488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0409 01:14:45.724856    7488 node_conditions.go:123] node cpu capacity is 2
	I0409 01:14:45.724856    7488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0409 01:14:45.724856    7488 node_conditions.go:123] node cpu capacity is 2
	I0409 01:14:45.724969    7488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0409 01:14:45.724969    7488 node_conditions.go:123] node cpu capacity is 2
	I0409 01:14:45.724969    7488 node_conditions.go:105] duration metric: took 157.4164ms to run NodePressure ...
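The NodePressure verification above just reads status.capacity off each of the three nodes in the list response (hence three identical capacity pairs). An equivalent client-go readout, under the same placeholder-kubeconfig assumption as the earlier sketches:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Quantities must be addressable to call String(), hence the copies.
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		// Log above shows ephemeral=17734596Ki, cpu=2 for each node.
		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
	}
}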
	I0409 01:14:45.724969    7488 start.go:241] waiting for startup goroutines ...
	I0409 01:14:45.724969    7488 start.go:246] waiting for cluster config update ...
	I0409 01:14:45.725064    7488 start.go:255] writing updated cluster config ...
	I0409 01:14:45.730329    7488 out.go:201] 
	I0409 01:14:45.733583    7488 config.go:182] Loaded profile config "ha-061400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 01:14:45.747495    7488 config.go:182] Loaded profile config "multinode-611500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 01:14:45.747495    7488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\config.json ...
	I0409 01:14:45.756344    7488 out.go:177] * Starting "multinode-611500-m02" worker node in "multinode-611500" cluster
	I0409 01:14:45.758852    7488 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0409 01:14:45.758852    7488 cache.go:56] Caching tarball of preloaded images
	I0409 01:14:45.759628    7488 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0409 01:14:45.760058    7488 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0409 01:14:45.760058    7488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\config.json ...
	I0409 01:14:45.762862    7488 start.go:360] acquireMachinesLock for multinode-611500-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0409 01:14:45.763089    7488 start.go:364] duration metric: took 125.4µs to acquireMachinesLock for "multinode-611500-m02"
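acquireMachinesLock is a named cross-process mutex serializing VM operations; the Spec printed above (hash-derived Name, Delay 500ms, Timeout 13m0s, nil Cancel) matches the shape of the juju/mutex API. A hedged sketch, assuming the juju/mutex/v2 module and an invented lock name:

package main

import (
	"fmt"
	"time"

	"github.com/juju/clock"
	"github.com/juju/mutex/v2"
)

func main() {
	spec := mutex.Spec{
		Name:    "mkexample", // the log's name is hash-derived; this one is made up
		Clock:   clock.WallClock,
		Delay:   500 * time.Millisecond, // retry interval, as in the Spec above
		Timeout: 13 * time.Minute,       // give up after 13m, as in the Spec above
	}
	releaser, err := mutex.Acquire(spec)
	if err != nil {
		panic(err)
	}
	defer releaser.Release()
	fmt.Println("holding machines lock; safe to start/stop the VM")
}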
	I0409 01:14:45.763252    7488 start.go:96] Skipping create...Using existing machine configuration
	I0409 01:14:45.763252    7488 fix.go:54] fixHost starting: m02
	I0409 01:14:45.763887    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:14:47.936148    7488 main.go:141] libmachine: [stdout =====>] : Off
	
	I0409 01:14:47.936148    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:14:47.936148    7488 fix.go:112] recreateIfNeeded on multinode-611500-m02: state=Stopped err=<nil>
	W0409 01:14:47.936148    7488 fix.go:138] unexpected machine state, will restart: <nil>
	I0409 01:14:47.940871    7488 out.go:177] * Restarting existing hyperv VM for "multinode-611500-m02" ...
	I0409 01:14:47.943633    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-611500-m02
	I0409 01:14:51.034142    7488 main.go:141] libmachine: [stdout =====>] : 
	I0409 01:14:51.034142    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:14:51.034142    7488 main.go:141] libmachine: Waiting for host to start...
	I0409 01:14:51.035180    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:14:53.339184    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:14:53.339184    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:14:53.339781    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:14:55.843529    7488 main.go:141] libmachine: [stdout =====>] : 
	I0409 01:14:55.843529    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:14:56.844294    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:14:59.094874    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:14:59.094874    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:14:59.094874    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:01.722254    7488 main.go:141] libmachine: [stdout =====>] : 
	I0409 01:15:01.722254    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:02.722943    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:04.920427    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:04.920427    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:04.920427    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:07.494375    7488 main.go:141] libmachine: [stdout =====>] : 
	I0409 01:15:07.495062    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:08.496101    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:10.692006    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:10.692006    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:10.692006    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:13.201103    7488 main.go:141] libmachine: [stdout =====>] : 
	I0409 01:15:13.201103    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:14.203172    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:16.436209    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:16.436209    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:16.436717    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:19.030179    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:15:19.030179    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:19.034265    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:21.152735    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:21.152735    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:21.152735    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:23.678416    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:15:23.678486    7488 main.go:141] libmachine: [stderr =====>] : 
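
The block above is the driver's "Waiting for host to start" phase: fixHost found the VM Off, restarted it with Hyper-V\Start-VM, then alternated two PowerShell queries, VM state and the first address of the first network adapter, until DHCP populated an IP (01:15:19 here). Each query is a separate powershell.exe process started with -NoProfile -NonInteractive, and the timestamps show every round trip costing a bit over two seconds, which is why this phase dominates the restart time. A minimal Go sketch of the pattern, assuming nothing beyond the standard library (runPS and waitForIP are illustrative names, not minikube's actual helpers, and the one-second retry interval is inferred from the timestamps):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // runPS runs one non-interactive powershell.exe per query, exactly as
    // the [executing ==>] lines above do.
    func runPS(query string) (string, error) {
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", query,
        ).Output()
        return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls VM state and the first NIC's first address until
    // Hyper-V reports one, mirroring the repeated Get-VM calls in the log.
    func waitForIP(vm string) (string, error) {
        for {
            state, err := runPS(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
            if err != nil {
                return "", err
            }
            if state == "Running" {
                ip, err := runPS(fmt.Sprintf(
                    "(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
                if err != nil {
                    return "", err
                }
                if ip != "" {
                    return ip, nil
                }
            }
            time.Sleep(1 * time.Second)
        }
    }

    func main() {
        ip, err := waitForIP("multinode-611500-m02")
        if err != nil {
            panic(err)
        }
        fmt.Println(ip)
    }
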
	I0409 01:15:23.678600    7488 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-611500\config.json ...
	I0409 01:15:23.681530    7488 machine.go:93] provisionDockerMachine start ...
	I0409 01:15:23.681530    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:25.832645    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:25.832645    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:25.832727    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:28.357988    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:15:28.357988    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:28.364553    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:15:28.365311    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.114.152 22 <nil> <nil>}
	I0409 01:15:28.365311    7488 main.go:141] libmachine: About to run SSH command:
	hostname
	I0409 01:15:28.514656    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0409 01:15:28.514656    7488 buildroot.go:166] provisioning hostname "multinode-611500-m02"
	I0409 01:15:28.514656    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:30.731578    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:30.731578    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:30.731578    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:33.270811    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:15:33.271368    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:33.278535    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:15:33.278535    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.114.152 22 <nil> <nil>}
	I0409 01:15:33.278535    7488 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-611500-m02 && echo "multinode-611500-m02" | sudo tee /etc/hostname
	I0409 01:15:33.447119    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-611500-m02
	
	I0409 01:15:33.447196    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:35.551098    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:35.551636    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:35.551817    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:38.097233    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:15:38.097233    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:38.103092    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:15:38.104072    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.114.152 22 <nil> <nil>}
	I0409 01:15:38.104072    7488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-611500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-611500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-611500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0409 01:15:38.268677    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
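
The `hostname` probe at 01:15:28 returned the image default "minikube", so the provisioner pushes the machine name in two SSH commands: first set the transient hostname plus /etc/hostname, then patch /etc/hosts idempotently (rewrite an existing 127.0.1.1 line, otherwise append one), which is the script shown above. A sketch of how those command strings can be assembled host-side (hostnameCmds is an illustrative helper; minikube builds the equivalent strings in its provisioner):

    package main

    import "fmt"

    // hostnameCmds returns the two guest-side shell commands seen in the
    // log: one sets the hostname, one keeps /etc/hosts consistent with it.
    func hostnameCmds(name string) []string {
        set := fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, name)
        hosts := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
        return []string{set, hosts}
    }

    func main() {
        for _, c := range hostnameCmds("multinode-611500-m02") {
            fmt.Println(c)
        }
    }
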
	I0409 01:15:38.268677    7488 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0409 01:15:38.268825    7488 buildroot.go:174] setting up certificates
	I0409 01:15:38.268825    7488 provision.go:84] configureAuth start
	I0409 01:15:38.268825    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:40.396708    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:40.396773    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:40.396773    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:42.884035    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:15:42.884338    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:42.884338    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:45.021881    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:45.022115    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:45.022231    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:47.543154    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:15:47.543731    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:47.543785    7488 provision.go:143] copyHostCerts
	I0409 01:15:47.543785    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0409 01:15:47.543785    7488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0409 01:15:47.544305    7488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0409 01:15:47.544506    7488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0409 01:15:47.545752    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0409 01:15:47.545752    7488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0409 01:15:47.546281    7488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0409 01:15:47.546480    7488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0409 01:15:47.547835    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0409 01:15:47.547890    7488 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0409 01:15:47.547890    7488 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0409 01:15:47.548419    7488 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1679 bytes)
	I0409 01:15:47.550054    7488 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-611500-m02 san=[127.0.0.1 192.168.114.152 localhost minikube multinode-611500-m02]
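
configureAuth copies the host CA and client certs, then mints a per-machine server certificate whose SANs are exactly the set in the log line above: 127.0.0.1, the VM's fresh IP 192.168.114.152, localhost, minikube, and the node name. These are the files the dockerd flags later in this log (--tlsverify --tlscacert ... --tlscert ... --tlskey ...) point at. A self-contained crypto/x509 sketch of minting such a certificate; it generates a throwaway CA rather than loading minikube's ca.pem/ca-key.pem, and key sizes, lifetimes, and error handling are simplified:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for .minikube\certs\ca.pem / ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"jenkins.multinode-611500-m02"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SAN set from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-611500-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "multinode-611500-m02"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.114.152")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
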
	I0409 01:15:47.601818    7488 provision.go:177] copyRemoteCerts
	I0409 01:15:47.609973    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0409 01:15:47.609973    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:49.734646    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:49.734646    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:49.734788    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:52.304326    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:15:52.304468    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:52.305263    7488 sshutil.go:53] new ssh client: &{IP:192.168.114.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500-m02\id_rsa Username:docker}
	I0409 01:15:52.418555    7488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8080938s)
	I0409 01:15:52.418601    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0409 01:15:52.419045    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0409 01:15:52.464199    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0409 01:15:52.464595    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0409 01:15:52.512719    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0409 01:15:52.512780    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0409 01:15:52.561075    7488 provision.go:87] duration metric: took 14.2920682s to configureAuth
	I0409 01:15:52.561075    7488 buildroot.go:189] setting minikube options for container-runtime
	I0409 01:15:52.562279    7488 config.go:182] Loaded profile config "multinode-611500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 01:15:52.562350    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:54.686908    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:54.687500    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:54.687500    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:15:57.188617    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:15:57.188673    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:57.194328    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:15:57.194950    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.114.152 22 <nil> <nil>}
	I0409 01:15:57.194950    7488 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0409 01:15:57.336329    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0409 01:15:57.336329    7488 buildroot.go:70] root file system type: tmpfs
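
The `df --output=fstype / | tail -n 1` probe returning tmpfs tells the provisioner the guest root is the non-persistent buildroot live image, which is why the next step writes the docker unit out fresh instead of assuming one survived the reboot. The same probe as a local Go sketch (GNU coreutils df is assumed, which is what the buildroot guest provides):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Last whitespace-separated token of `df --output=fstype /` is the
        // root filesystem type ("tmpfs" in the log above).
        out, err := exec.Command("df", "--output=fstype", "/").Output()
        if err != nil {
            panic(err)
        }
        fields := strings.Fields(string(out))
        fmt.Println(fields[len(fields)-1])
    }
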
	I0409 01:15:57.336329    7488 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0409 01:15:57.336329    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:15:59.454348    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:15:59.454348    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:15:59.455210    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:16:01.959448    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:16:01.959886    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:16:01.965179    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:16:01.966055    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.114.152 22 <nil> <nil>}
	I0409 01:16:01.966055    7488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.120.172"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0409 01:16:02.136257    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.120.172
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
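
The unit streamed over SSH above (and echoed back verbatim as the command output) is rendered host-side with the proxy environment and driver flags substituted in: NO_PROXY=192.168.120.172 is the control-plane node's InternalIP, --label provider=hyperv and --insecure-registry 10.96.0.0/12 are driver and service-CIDR specific, and the bare ExecStart= line clears the base unit's command before redefining it, as the in-file comment explains. A reduced text/template sketch of that rendering (the template body and the NoProxy/ExtraFlags field names are illustrative, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Unit]
    Description=Docker Application Container Engine
    After=network.target minikube-automount.service docker.socket
    Requires=minikube-automount.service docker.socket

    [Service]
    Type=notify
    Restart=on-failure
    Environment="NO_PROXY={{.NoProxy}}"
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock {{.ExtraFlags}}
    ExecReload=/bin/kill -s HUP $MAINPID

    [Install]
    WantedBy=multi-user.target
    `

    func main() {
        t := template.Must(template.New("docker.service").Parse(unit))
        t.Execute(os.Stdout, map[string]string{
            "NoProxy":    "192.168.120.172",
            "ExtraFlags": "--label provider=hyperv --insecure-registry 10.96.0.0/12",
        })
    }
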
	I0409 01:16:02.136257    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:16:04.246481    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:16:04.247337    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:16:04.247337    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:16:06.772922    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:16:06.773715    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:16:06.778866    7488 main.go:141] libmachine: Using SSH client type: native
	I0409 01:16:06.779487    7488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd7d00] 0xbda840 <nil>  [] 0s} 192.168.114.152 22 <nil> <nil>}
	I0409 01:16:06.779487    7488 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0409 01:16:09.221234    7488 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0409 01:16:09.221234    7488 machine.go:96] duration metric: took 45.5391261s to provisionDockerMachine
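
provisionDockerMachine closes with an idempotence guard: `sudo diff -u old new || { mv ...; daemon-reload; enable; restart; }` only installs and restarts docker when the rendered unit differs from what is on disk. Here diff failed with "can't stat ... docker.service" because the freshly booted tmpfs root had no unit yet, so the branch fired and systemd created the enable symlink. The same guard expressed as a Go sketch (installIfChanged is an illustrative name; a missing destination counts as changed, matching the diff failure above):

    package main

    import (
        "bytes"
        "os"
        "os/exec"
    )

    // installIfChanged replaces dst with src and restarts docker only when
    // the contents differ, like the shell guard in the log above.
    func installIfChanged(src, dst string) error {
        newUnit, err := os.ReadFile(src)
        if err != nil {
            return err
        }
        oldUnit, err := os.ReadFile(dst) // a missing file counts as "changed"
        if err == nil && bytes.Equal(oldUnit, newUnit) {
            return nil // unit unchanged: no reload, no restart
        }
        if err := os.Rename(src, dst); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
        } {
            if err := exec.Command("systemctl", args...).Run(); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        if err := installIfChanged("/lib/systemd/system/docker.service.new",
            "/lib/systemd/system/docker.service"); err != nil {
            panic(err)
        }
    }
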
	I0409 01:16:09.221234    7488 start.go:293] postStartSetup for "multinode-611500-m02" (driver="hyperv")
	I0409 01:16:09.221234    7488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0409 01:16:09.233677    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0409 01:16:09.233677    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:16:11.439541    7488 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:16:11.439541    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:16:11.440493    7488 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:16:13.959202    7488 main.go:141] libmachine: [stdout =====>] : 192.168.114.152
	
	I0409 01:16:13.959202    7488 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:16:13.960674    7488 sshutil.go:53] new ssh client: &{IP:192.168.114.152 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500-m02\id_rsa Username:docker}
	I0409 01:16:14.077356    7488 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8436178s)
	I0409 01:16:14.089661    7488 ssh_runner.go:195] Run: cat /etc/os-release
	I0409 01:16:14.096694    7488 command_runner.go:130] > NAME=Buildroot
	I0409 01:16:14.096694    7488 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0409 01:16:14.096694    7488 command_runner.go:130] > ID=buildroot
	I0409 01:16:14.096694    7488 command_runner.go:130] > VERSION_ID=2023.02.9
	I0409 01:16:14.096694    7488 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0409 01:16:14.096694    7488 info.go:137] Remote host: Buildroot 2023.02.9
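
The "Remote host: Buildroot 2023.02.9" line is derived from the /etc/os-release key=value pairs fetched just above (NAME plus VERSION_ID, with quoting stripped, as on the PRETTY_NAME line). A minimal parser sketch, run against a local /etc/os-release:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/os-release")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        info := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            k, v, ok := strings.Cut(sc.Text(), "=")
            if !ok {
                continue
            }
            info[k] = strings.Trim(v, `"`) // PRETTY_NAME is quoted, NAME is not (here)
        }
        fmt.Printf("Remote host: %s %s\n", info["NAME"], info["VERSION_ID"])
    }
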
	I0409 01:16:14.096694    7488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0409 01:16:14.096694    7488 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0409 01:16:14.097740    7488 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> 98642.pem in /etc/ssl/certs
	I0409 01:16:14.097740    7488 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem -> /etc/ssl/certs/98642.pem
	I0409 01:16:14.107219    7488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0409 01:16:14.125906    7488 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\98642.pem --> /etc/ssl/certs/98642.pem (1708 bytes)
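
filesync scans the host-side .minikube\addons and .minikube\files trees and mirrors every file into the guest, preserving the path below the files\ root, which is how files\etc\ssl\certs\98642.pem becomes /etc/ssl/certs/98642.pem above. A sketch of that path mapping with filepath.WalkDir (meant to run on the Windows host; the root literal is taken from the log, and the scp step itself is not shown):

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
    )

    func main() {
        root := `C:\Users\jenkins.minikube6\minikube-integration\.minikube\files`
        _ = filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            // Guest path = the path below the files\ root, reslashed.
            rel, _ := filepath.Rel(root, p)
            guest := "/" + filepath.ToSlash(rel)
            fmt.Printf("%s -> %s\n", p, guest)
            return nil
        })
    }
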
	
	
	==> Docker <==
	Apr 09 01:14:30 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:30.543973586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 09 01:14:30 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:30.543988286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 09 01:14:30 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:30.544968394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 09 01:14:30 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:30.568872386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 09 01:14:30 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:30.569041088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 09 01:14:30 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:30.569321390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 09 01:14:30 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:30.569525192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 09 01:14:30 multinode-611500 cri-dockerd[1383]: time="2025-04-09T01:14:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b66ceb95bff48994dd55e61e5dfcbbc0f3c87e8c574f5f5045f22b0eb5f924ce/resolv.conf as [nameserver 192.168.112.1]"
	Apr 09 01:14:30 multinode-611500 cri-dockerd[1383]: time="2025-04-09T01:14:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/772a3f1581a01df8f00f8b1511d2922da23d0a57fbcf3015aeb3543eff11cc7b/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 09 01:14:31 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:31.095510557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 09 01:14:31 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:31.095594658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 09 01:14:31 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:31.095652259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 09 01:14:31 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:31.095850061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 09 01:14:31 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:31.321299660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 09 01:14:31 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:31.321533463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 09 01:14:31 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:31.321623465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 09 01:14:31 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:31.322027370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 09 01:14:45 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:45.905885937Z" level=info msg="shim disconnected" id=dcb3873adaa5ef7384923efcc50ff68aa7aceb26619aacf9784f082f7f796d7e namespace=moby
	Apr 09 01:14:45 multinode-611500 dockerd[1102]: time="2025-04-09T01:14:45.906779242Z" level=info msg="ignoring event" container=dcb3873adaa5ef7384923efcc50ff68aa7aceb26619aacf9784f082f7f796d7e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 09 01:14:45 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:45.908263150Z" level=warning msg="cleaning up after shim disconnected" id=dcb3873adaa5ef7384923efcc50ff68aa7aceb26619aacf9784f082f7f796d7e namespace=moby
	Apr 09 01:14:45 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:45.908284150Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 09 01:14:58 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:58.459103570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 09 01:14:58 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:58.459438872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 09 01:14:58 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:58.460356976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 09 01:14:58 multinode-611500 dockerd[1110]: time="2025-04-09T01:14:58.460686378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	efe6fa4908eb9       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   28f1a7c2ab71d       storage-provisioner
	722788a80a860       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   772a3f1581a01       busybox-58667487b6-q97dd
	2fdb622025cce       c69fa2e9cbf5f                                                                                         2 minutes ago        Running             coredns                   1                   b66ceb95bff48       coredns-668d6bf9bc-d54s4
	baa63af019cc8       b6a454c5a800d                                                                                         2 minutes ago        Running             kube-controller-manager   2                   1955fb24c1f12       kube-controller-manager-multinode-611500
	d066c2854606e       df3849d954c98                                                                                         2 minutes ago        Running             kindnet-cni               1                   e9e0a619d6643       kindnet-vntlr
	22ec0eeb19291       f1332858868e1                                                                                         2 minutes ago        Running             kube-proxy                1                   d56573ddbcfd4       kube-proxy-zxxgf
	dcb3873adaa5e       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   28f1a7c2ab71d       storage-provisioner
	439a6e98215c7       a9e7e6b294baf                                                                                         2 minutes ago        Running             etcd                      0                   d9c8daaa35f32       etcd-multinode-611500
	3ff17ef364ac8       85b7a174738ba                                                                                         2 minutes ago        Running             kube-apiserver            1                   d859fbdb074aa       kube-apiserver-multinode-611500
	bfe205d35dd0a       85b7a174738ba                                                                                         3 minutes ago        Exited              kube-apiserver            0                   d859fbdb074aa       kube-apiserver-multinode-611500
	174a8c157134a       b6a454c5a800d                                                                                         3 minutes ago        Exited              kube-controller-manager   1                   1955fb24c1f12       kube-controller-manager-multinode-611500
	58bc65f15b6a5       d8e673e7c9983                                                                                         3 minutes ago        Running             kube-scheduler            1                   8dafd92fd69da       kube-scheduler-multinode-611500
	b2c663be115f5       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago       Exited              busybox                   0                   b5dfc9645b5a9       busybox-58667487b6-q97dd
	934a19227cebf       c69fa2e9cbf5f                                                                                         26 minutes ago       Exited              coredns                   0                   5709459d3357e       coredns-668d6bf9bc-d54s4
	14703ff53a0b7       kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495              27 minutes ago       Exited              kindnet-cni               0                   40c7183a37ea2       kindnet-vntlr
	1a9f657c2b5a3       f1332858868e1                                                                                         27 minutes ago       Exited              kube-proxy                0                   0a2ad19ce50fc       kube-proxy-zxxgf
	8fec401b4d086       d8e673e7c9983                                                                                         27 minutes ago       Exited              kube-scheduler            0                   77b1d88aa1629       kube-scheduler-multinode-611500
	
	
	==> coredns [2fdb622025cc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 13ea841c4c8fbfe6051ef394de3a709b16f372b91ce75a0f84114570a1439c0386943c7d15a96a542d0e53e3210046dd57e1217b804f643ed0e91fcf61a6e79e
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46989 - 7634 "HINFO IN 8623730847466587674.2149690114897580216. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.133448634s
	
	
	==> coredns [934a19227ceb] <==
	[INFO] 10.244.0.3:51460 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000099601s
	[INFO] 10.244.0.3:55687 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000278303s
	[INFO] 10.244.0.3:40394 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000516106s
	[INFO] 10.244.0.3:40522 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000223003s
	[INFO] 10.244.0.3:37860 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000171603s
	[INFO] 10.244.0.3:39917 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000301904s
	[INFO] 10.244.0.3:46701 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169703s
	[INFO] 10.244.1.2:34733 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156902s
	[INFO] 10.244.1.2:58701 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161202s
	[INFO] 10.244.1.2:40033 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000199402s
	[INFO] 10.244.1.2:46371 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072001s
	[INFO] 10.244.0.3:36931 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132602s
	[INFO] 10.244.0.3:33483 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000175203s
	[INFO] 10.244.0.3:38836 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000286804s
	[INFO] 10.244.0.3:37565 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000127601s
	[INFO] 10.244.1.2:40936 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181303s
	[INFO] 10.244.1.2:36358 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000184102s
	[INFO] 10.244.1.2:44504 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000107402s
	[INFO] 10.244.1.2:55001 - 5 "PTR IN 1.112.168.192.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd 106 0.000108502s
	[INFO] 10.244.0.3:32994 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000206602s
	[INFO] 10.244.0.3:57902 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000174602s
	[INFO] 10.244.0.3:43398 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000120602s
	[INFO] 10.244.0.3:39057 - 5 "PTR IN 1.112.168.192.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd 106 0.000086501s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-611500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-611500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83
	                    minikube.k8s.io/name=multinode-611500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_09T00_49_22_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Apr 2025 00:49:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-611500
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Apr 2025 01:16:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Apr 2025 01:14:27 +0000   Wed, 09 Apr 2025 00:49:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Apr 2025 01:14:27 +0000   Wed, 09 Apr 2025 00:49:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Apr 2025 01:14:27 +0000   Wed, 09 Apr 2025 00:49:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Apr 2025 01:14:27 +0000   Wed, 09 Apr 2025 01:14:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.120.172
	  Hostname:    multinode-611500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 becd87c573e847cb97dd8c788ecda423
	  System UUID:                e993950d-aeba-6b4b-885d-4b2e551f8dbc
	  Boot ID:                    7561aafa-533a-47bc-bf9c-add127fd4d82
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-q97dd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-668d6bf9bc-d54s4                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-611500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m26s
	  kube-system                 kindnet-vntlr                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-611500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-controller-manager-multinode-611500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-zxxgf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-611500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 27m                    kube-proxy       
	  Normal   Starting                 2m24s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-611500 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-611500 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-611500 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    27m                    kubelet          Node multinode-611500 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  27m                    kubelet          Node multinode-611500 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     27m                    kubelet          Node multinode-611500 status is now: NodeHasSufficientPID
	  Normal   Starting                 27m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           27m                    node-controller  Node multinode-611500 event: Registered Node multinode-611500 in Controller
	  Normal   NodeReady                26m                    kubelet          Node multinode-611500 status is now: NodeReady
	  Normal   Starting                 3m13s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m13s (x8 over 3m13s)  kubelet          Node multinode-611500 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m13s (x8 over 3m13s)  kubelet          Node multinode-611500 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m13s (x7 over 3m13s)  kubelet          Node multinode-611500 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m13s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m27s                  kubelet          Node multinode-611500 has been rebooted, boot id: 7561aafa-533a-47bc-bf9c-add127fd4d82
	  Normal   RegisteredNode           2m7s                   node-controller  Node multinode-611500 event: Registered Node multinode-611500 in Controller
	
	
	Name:               multinode-611500-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-611500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83
	                    minikube.k8s.io/name=multinode-611500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_09T00_52_33_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Apr 2025 00:52:33 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-611500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Apr 2025 01:10:18 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 09 Apr 2025 01:10:14 +0000   Wed, 09 Apr 2025 01:15:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 09 Apr 2025 01:10:14 +0000   Wed, 09 Apr 2025 01:15:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 09 Apr 2025 01:10:14 +0000   Wed, 09 Apr 2025 01:15:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 09 Apr 2025 01:10:14 +0000   Wed, 09 Apr 2025 01:15:23 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.113.143
	  Hostname:    multinode-611500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 ddab213e0f664ee998425119dc3e7a46
	  System UUID:                2b3ed102-bc59-9642-a45e-e1d26e5f9a17
	  Boot ID:                    6aa204db-bc4c-4b56-ad49-3d6e873355d1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-c426d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kindnet-66fr6               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-bhjnx            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     24m                cidrAllocator    Node multinode-611500-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  24m (x2 over 24m)  kubelet          Node multinode-611500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet          Node multinode-611500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x2 over 24m)  kubelet          Node multinode-611500-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24m                node-controller  Node multinode-611500-m02 event: Registered Node multinode-611500-m02 in Controller
	  Normal  NodeReady                23m                kubelet          Node multinode-611500-m02 status is now: NodeReady
	  Normal  RegisteredNode           2m7s               node-controller  Node multinode-611500-m02 event: Registered Node multinode-611500-m02 in Controller
	  Normal  NodeNotReady             77s                node-controller  Node multinode-611500-m02 status is now: NodeNotReady
	
	
	Name:               multinode-611500-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-611500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd2f4c3eba2bd452b5997c855e28d0966165ba83
	                    minikube.k8s.io/name=multinode-611500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_09T01_08_49_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Apr 2025 01:08:48 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-611500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Apr 2025 01:09:59 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 09 Apr 2025 01:09:06 +0000   Wed, 09 Apr 2025 01:10:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 09 Apr 2025 01:09:06 +0000   Wed, 09 Apr 2025 01:10:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 09 Apr 2025 01:09:06 +0000   Wed, 09 Apr 2025 01:10:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 09 Apr 2025 01:09:06 +0000   Wed, 09 Apr 2025 01:10:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.116.185
	  Hostname:    multinode-611500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c187e8656c664cbc87a09317d63bbf88
	  System UUID:                a8caa263-2792-df4d-9441-ad2df4ef16f6
	  Boot ID:                    e099429a-3630-4174-9c5b-9cfcedbb99c4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.5.0/24
	PodCIDRs:                     10.244.5.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-v66j5       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-xnh8p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 7m49s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)      kubelet          Node multinode-611500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)      kubelet          Node multinode-611500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)      kubelet          Node multinode-611500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m                    kubelet          Node multinode-611500-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  7m52s (x2 over 7m52s)  kubelet          Node multinode-611500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m52s (x2 over 7m52s)  kubelet          Node multinode-611500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m52s (x2 over 7m52s)  kubelet          Node multinode-611500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m50s                  node-controller  Node multinode-611500-m03 event: Registered Node multinode-611500-m03 in Controller
	  Normal  NodeReady                7m34s                  kubelet          Node multinode-611500-m03 status is now: NodeReady
	  Normal  NodeNotReady             5m50s                  node-controller  Node multinode-611500-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           2m7s                   node-controller  Node multinode-611500-m03 event: Registered Node multinode-611500-m03 in Controller
	
	
	==> dmesg <==
	              * this clock source is slow. Consider trying other clock sources
	[  +5.553976] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.337734] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.269277] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[Apr 9 01:12] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000006] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +50.397895] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.177209] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[Apr 9 01:13] systemd-fstab-generator[1030]: Ignoring "noauto" option for root device
	[  +0.088454] kauditd_printk_skb: 75 callbacks suppressed
	[  +0.566409] systemd-fstab-generator[1068]: Ignoring "noauto" option for root device
	[  +0.184982] systemd-fstab-generator[1080]: Ignoring "noauto" option for root device
	[  +0.230862] systemd-fstab-generator[1094]: Ignoring "noauto" option for root device
	[  +2.979360] systemd-fstab-generator[1336]: Ignoring "noauto" option for root device
	[  +0.189393] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[  +0.181345] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.273715] systemd-fstab-generator[1375]: Ignoring "noauto" option for root device
	[  +0.892003] systemd-fstab-generator[1502]: Ignoring "noauto" option for root device
	[  +0.091943] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.115435] systemd-fstab-generator[1645]: Ignoring "noauto" option for root device
	[  +1.904460] kauditd_printk_skb: 64 callbacks suppressed
	[Apr 9 01:14] systemd-fstab-generator[2456]: Ignoring "noauto" option for root device
	[  +4.254879] kauditd_printk_skb: 70 callbacks suppressed
	[ +25.489492] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [439a6e98215c] <==
	{"level":"info","ts":"2025-04-09T01:14:02.474947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7c5cb9f55cc68db1 switched to configuration voters=(8961241822035086769)"}
	{"level":"info","ts":"2025-04-09T01:14:02.475087Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c0f2dc799db55cf1","local-member-id":"7c5cb9f55cc68db1","added-peer-id":"7c5cb9f55cc68db1","added-peer-peer-urls":["https://192.168.113.157:2380"]}
	{"level":"info","ts":"2025-04-09T01:14:02.475413Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c0f2dc799db55cf1","local-member-id":"7c5cb9f55cc68db1","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-09T01:14:02.475449Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-09T01:14:02.482491Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-09T01:14:02.483367Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"7c5cb9f55cc68db1","initial-advertise-peer-urls":["https://192.168.120.172:2380"],"listen-peer-urls":["https://192.168.120.172:2380"],"advertise-client-urls":["https://192.168.120.172:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.120.172:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-09T01:14:02.483419Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-09T01:14:02.483569Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.120.172:2380"}
	{"level":"info","ts":"2025-04-09T01:14:02.483601Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.120.172:2380"}
	{"level":"info","ts":"2025-04-09T01:14:03.937838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7c5cb9f55cc68db1 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-09T01:14:03.937956Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7c5cb9f55cc68db1 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-09T01:14:03.938019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7c5cb9f55cc68db1 received MsgPreVoteResp from 7c5cb9f55cc68db1 at term 2"}
	{"level":"info","ts":"2025-04-09T01:14:03.938039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7c5cb9f55cc68db1 became candidate at term 3"}
	{"level":"info","ts":"2025-04-09T01:14:03.938049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7c5cb9f55cc68db1 received MsgVoteResp from 7c5cb9f55cc68db1 at term 3"}
	{"level":"info","ts":"2025-04-09T01:14:03.938076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7c5cb9f55cc68db1 became leader at term 3"}
	{"level":"info","ts":"2025-04-09T01:14:03.938085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7c5cb9f55cc68db1 elected leader 7c5cb9f55cc68db1 at term 3"}
	{"level":"info","ts":"2025-04-09T01:14:03.948009Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-09T01:14:03.947930Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"7c5cb9f55cc68db1","local-member-attributes":"{Name:multinode-611500 ClientURLs:[https://192.168.120.172:2379]}","request-path":"/0/members/7c5cb9f55cc68db1/attributes","cluster-id":"c0f2dc799db55cf1","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-09T01:14:03.951129Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-09T01:14:03.953036Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.120.172:2379"}
	{"level":"info","ts":"2025-04-09T01:14:03.953878Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-09T01:14:03.955017Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-09T01:14:03.956180Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-09T01:14:03.956722Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-09T01:14:03.956791Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 01:16:40 up 4 min,  0 users,  load average: 0.73, 0.66, 0.29
	Linux multinode-611500 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [14703ff53a0b] <==
	I0409 01:10:15.577363       1 main.go:324] Node multinode-611500-m02 has CIDR [10.244.1.0/24] 
	I0409 01:10:25.576227       1 main.go:297] Handling node with IPs: map[192.168.113.157:{}]
	I0409 01:10:25.576395       1 main.go:301] handling current node
	I0409 01:10:25.576417       1 main.go:297] Handling node with IPs: map[192.168.113.143:{}]
	I0409 01:10:25.576427       1 main.go:324] Node multinode-611500-m02 has CIDR [10.244.1.0/24] 
	I0409 01:10:25.577700       1 main.go:297] Handling node with IPs: map[192.168.116.185:{}]
	I0409 01:10:25.577881       1 main.go:324] Node multinode-611500-m03 has CIDR [10.244.5.0/24] 
	I0409 01:10:35.575165       1 main.go:297] Handling node with IPs: map[192.168.116.185:{}]
	I0409 01:10:35.575229       1 main.go:324] Node multinode-611500-m03 has CIDR [10.244.5.0/24] 
	I0409 01:10:35.575414       1 main.go:297] Handling node with IPs: map[192.168.113.157:{}]
	I0409 01:10:35.575426       1 main.go:301] handling current node
	I0409 01:10:35.575439       1 main.go:297] Handling node with IPs: map[192.168.113.143:{}]
	I0409 01:10:35.575464       1 main.go:324] Node multinode-611500-m02 has CIDR [10.244.1.0/24] 
	I0409 01:10:45.575152       1 main.go:297] Handling node with IPs: map[192.168.113.157:{}]
	I0409 01:10:45.575208       1 main.go:301] handling current node
	I0409 01:10:45.575226       1 main.go:297] Handling node with IPs: map[192.168.113.143:{}]
	I0409 01:10:45.575232       1 main.go:324] Node multinode-611500-m02 has CIDR [10.244.1.0/24] 
	I0409 01:10:45.575654       1 main.go:297] Handling node with IPs: map[192.168.116.185:{}]
	I0409 01:10:45.575753       1 main.go:324] Node multinode-611500-m03 has CIDR [10.244.5.0/24] 
	I0409 01:10:55.575220       1 main.go:297] Handling node with IPs: map[192.168.113.157:{}]
	I0409 01:10:55.575366       1 main.go:301] handling current node
	I0409 01:10:55.575393       1 main.go:297] Handling node with IPs: map[192.168.113.143:{}]
	I0409 01:10:55.575420       1 main.go:324] Node multinode-611500-m02 has CIDR [10.244.1.0/24] 
	I0409 01:10:55.576129       1 main.go:297] Handling node with IPs: map[192.168.116.185:{}]
	I0409 01:10:55.576168       1 main.go:324] Node multinode-611500-m03 has CIDR [10.244.5.0/24] 
	
	
	==> kindnet [d066c2854606] <==
	I0409 01:15:57.534224       1 main.go:324] Node multinode-611500-m03 has CIDR [10.244.5.0/24] 
	I0409 01:16:07.537000       1 main.go:297] Handling node with IPs: map[192.168.120.172:{}]
	I0409 01:16:07.537039       1 main.go:301] handling current node
	I0409 01:16:07.537058       1 main.go:297] Handling node with IPs: map[192.168.113.143:{}]
	I0409 01:16:07.537065       1 main.go:324] Node multinode-611500-m02 has CIDR [10.244.1.0/24] 
	I0409 01:16:07.537428       1 main.go:297] Handling node with IPs: map[192.168.116.185:{}]
	I0409 01:16:07.537458       1 main.go:324] Node multinode-611500-m03 has CIDR [10.244.5.0/24] 
	I0409 01:16:17.532496       1 main.go:297] Handling node with IPs: map[192.168.116.185:{}]
	I0409 01:16:17.532594       1 main.go:324] Node multinode-611500-m03 has CIDR [10.244.5.0/24] 
	I0409 01:16:17.533302       1 main.go:297] Handling node with IPs: map[192.168.120.172:{}]
	I0409 01:16:17.533334       1 main.go:301] handling current node
	I0409 01:16:17.533349       1 main.go:297] Handling node with IPs: map[192.168.113.143:{}]
	I0409 01:16:17.533355       1 main.go:324] Node multinode-611500-m02 has CIDR [10.244.1.0/24] 
	I0409 01:16:27.538864       1 main.go:297] Handling node with IPs: map[192.168.113.143:{}]
	I0409 01:16:27.538980       1 main.go:324] Node multinode-611500-m02 has CIDR [10.244.1.0/24] 
	I0409 01:16:27.539356       1 main.go:297] Handling node with IPs: map[192.168.116.185:{}]
	I0409 01:16:27.539433       1 main.go:324] Node multinode-611500-m03 has CIDR [10.244.5.0/24] 
	I0409 01:16:27.539765       1 main.go:297] Handling node with IPs: map[192.168.120.172:{}]
	I0409 01:16:27.539782       1 main.go:301] handling current node
	I0409 01:16:37.540525       1 main.go:297] Handling node with IPs: map[192.168.116.185:{}]
	I0409 01:16:37.540639       1 main.go:324] Node multinode-611500-m03 has CIDR [10.244.5.0/24] 
	I0409 01:16:37.541377       1 main.go:297] Handling node with IPs: map[192.168.120.172:{}]
	I0409 01:16:37.541464       1 main.go:301] handling current node
	I0409 01:16:37.541481       1 main.go:297] Handling node with IPs: map[192.168.113.143:{}]
	I0409 01:16:37.541489       1 main.go:324] Node multinode-611500-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [3ff17ef364ac] <==
	I0409 01:14:09.294459       1 aggregator.go:171] initial CRD sync complete...
	I0409 01:14:09.294507       1 autoregister_controller.go:144] Starting autoregister controller
	I0409 01:14:09.294516       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0409 01:14:09.295724       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0409 01:14:09.320357       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0409 01:14:09.352629       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0409 01:14:09.369860       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0409 01:14:09.369876       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0409 01:14:09.373668       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0409 01:14:09.373939       1 shared_informer.go:320] Caches are synced for configmaps
	I0409 01:14:09.374278       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0409 01:14:09.374468       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0409 01:14:09.375919       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0409 01:14:09.405033       1 cache.go:39] Caches are synced for autoregister controller
	I0409 01:14:09.984390       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0409 01:14:10.394620       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.113.157 192.168.120.172]
	I0409 01:14:10.398029       1 controller.go:615] quota admission added evaluator for: endpoints
	I0409 01:14:10.407902       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0409 01:14:11.547799       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0409 01:14:11.729234       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0409 01:14:11.756387       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0409 01:14:11.863000       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0409 01:14:11.873537       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0409 01:14:20.394864       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.120.172]
	I0409 01:15:23.864135       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-apiserver [bfe205d35dd0] <==
	I0409 01:13:29.448582       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0409 01:13:30.164513       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0409 01:13:30.165390       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0409 01:13:30.165743       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0409 01:13:30.204268       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0409 01:13:30.206767       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0409 01:13:30.206806       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0409 01:13:30.207745       1 instance.go:233] Using reconciler: lease
	W0409 01:13:30.209539       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0409 01:13:31.166304       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0409 01:13:31.166345       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0409 01:13:31.211126       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0409 01:13:32.465921       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0409 01:13:32.889838       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0409 01:13:32.939493       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0409 01:13:35.484575       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0409 01:13:35.574619       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0409 01:13:35.876273       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0409 01:13:39.567662       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0409 01:13:39.648578       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0409 01:13:40.247935       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0409 01:13:46.135614       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0409 01:13:46.325145       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0409 01:13:47.822007       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0409 01:13:50.209601       1 instance.go:226] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [174a8c157134] <==
	I0409 01:13:29.593730       1 serving.go:386] Generated self-signed cert in-memory
	I0409 01:13:30.100646       1 controllermanager.go:185] "Starting" version="v1.32.2"
	I0409 01:13:30.100695       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0409 01:13:30.104792       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0409 01:13:30.107215       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0409 01:13:30.107736       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0409 01:13:30.108090       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0409 01:14:09.136267       1 controllermanager.go:230] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [baa63af019cc] <==
	I0409 01:14:33.698802       1 shared_informer.go:320] Caches are synced for taint
	I0409 01:14:33.699054       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0409 01:14:33.698863       1 shared_informer.go:320] Caches are synced for deployment
	I0409 01:14:33.702239       1 shared_informer.go:320] Caches are synced for expand
	I0409 01:14:33.699499       1 shared_informer.go:320] Caches are synced for attach detach
	I0409 01:14:33.706692       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0409 01:14:33.711948       1 shared_informer.go:320] Caches are synced for resource quota
	I0409 01:14:33.713457       1 shared_informer.go:320] Caches are synced for GC
	I0409 01:14:33.714787       1 shared_informer.go:320] Caches are synced for HPA
	I0409 01:14:33.725261       1 shared_informer.go:320] Caches are synced for persistent volume
	I0409 01:14:33.735062       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-611500-m02"
	I0409 01:14:33.735117       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-611500-m03"
	I0409 01:14:33.735942       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-611500"
	I0409 01:14:33.736316       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0409 01:14:33.736650       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-611500-m03"
	I0409 01:14:33.738198       1 shared_informer.go:320] Caches are synced for garbage collector
	I0409 01:14:33.738285       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0409 01:14:33.738358       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0409 01:14:33.741000       1 shared_informer.go:320] Caches are synced for ephemeral
	I0409 01:14:33.819008       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-611500-m03"
	I0409 01:15:23.768634       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-611500-m02"
	I0409 01:15:23.797277       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-611500-m02"
	I0409 01:15:23.876376       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="30.307467ms"
	I0409 01:15:23.876456       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="30.9µs"
	I0409 01:15:28.898250       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-611500-m02"
	
	
	==> kube-proxy [1a9f657c2b5a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0409 00:49:28.039254       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0409 00:49:28.086921       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.113.157"]
	E0409 00:49:28.087603       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0409 00:49:28.163284       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0409 00:49:28.163425       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0409 00:49:28.163503       1 server_linux.go:170] "Using iptables Proxier"
	I0409 00:49:28.168549       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0409 00:49:28.170109       1 server.go:497] "Version info" version="v1.32.2"
	I0409 00:49:28.170208       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0409 00:49:28.177841       1 config.go:199] "Starting service config controller"
	I0409 00:49:28.177990       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0409 00:49:28.178013       1 config.go:105] "Starting endpoint slice config controller"
	I0409 00:49:28.178058       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0409 00:49:28.180425       1 config.go:329] "Starting node config controller"
	I0409 00:49:28.180604       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0409 00:49:28.278851       1 shared_informer.go:320] Caches are synced for service config
	I0409 00:49:28.278861       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0409 00:49:28.283571       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [22ec0eeb1929] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0409 01:14:16.377256       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0409 01:14:16.420631       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.120.172"]
	E0409 01:14:16.420790       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0409 01:14:16.481782       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0409 01:14:16.481915       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0409 01:14:16.482105       1 server_linux.go:170] "Using iptables Proxier"
	I0409 01:14:16.486846       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0409 01:14:16.488277       1 server.go:497] "Version info" version="v1.32.2"
	I0409 01:14:16.488384       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0409 01:14:16.494389       1 config.go:199] "Starting service config controller"
	I0409 01:14:16.495292       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0409 01:14:16.495718       1 config.go:329] "Starting node config controller"
	I0409 01:14:16.495887       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0409 01:14:16.496066       1 config.go:105] "Starting endpoint slice config controller"
	I0409 01:14:16.496118       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0409 01:14:16.596598       1 shared_informer.go:320] Caches are synced for service config
	I0409 01:14:16.596619       1 shared_informer.go:320] Caches are synced for node config
	I0409 01:14:16.596612       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [58bc65f15b6a] <==
	W0409 01:14:09.241386       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0409 01:14:09.245062       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0409 01:14:09.241408       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0409 01:14:09.245825       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0409 01:14:09.241439       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0409 01:14:09.246588       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0409 01:14:09.241466       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0409 01:14:09.246838       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0409 01:14:09.241486       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0409 01:14:09.247258       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0409 01:14:09.241550       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0409 01:14:09.247489       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0409 01:14:09.241578       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0409 01:14:09.247861       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0409 01:14:09.241605       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0409 01:14:09.249020       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0409 01:14:09.241635       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0409 01:14:09.249296       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0409 01:14:09.241660       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0409 01:14:09.249623       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0409 01:14:09.241688       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	W0409 01:14:09.241712       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0409 01:14:09.250473       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0409 01:14:09.250794       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0409 01:14:10.838028       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8fec401b4d08] <==
	W0409 00:49:18.589582       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0409 00:49:18.589843       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0409 00:49:18.692182       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0409 00:49:18.692231       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0409 00:49:18.809191       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0409 00:49:18.809632       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0409 00:49:18.829593       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0409 00:49:18.829649       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0409 00:49:18.852706       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0409 00:49:18.852800       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0409 00:49:18.853226       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0409 00:49:18.853480       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0409 00:49:18.913033       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0409 00:49:18.913078       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0409 00:49:18.998014       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0409 00:49:18.998208       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0409 00:49:19.016126       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0409 00:49:19.016344       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0409 00:49:19.134507       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0409 00:49:19.134933       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0409 00:49:21.742091       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0409 01:11:02.423141       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0409 01:11:02.426110       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0409 01:11:02.437269       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0409 01:11:02.548264       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 09 01:14:26 multinode-611500 kubelet[1652]: E0409 01:14:26.248203    1652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-d54s4" podUID="12431f27-7e4e-41c9-8d54-bc7be2074b9c"
	Apr 09 01:14:27 multinode-611500 kubelet[1652]: E0409 01:14:27.245465    1652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-q97dd" podUID="2cd940b8-79aa-4c21-95f0-9ea66a73cd4a"
	Apr 09 01:14:27 multinode-611500 kubelet[1652]: I0409 01:14:27.267935    1652 scope.go:117] "RemoveContainer" containerID="45eca668cef5527a4dcd2cde8e474ed6a6f13496145ec4ee341527212a317808"
	Apr 09 01:14:27 multinode-611500 kubelet[1652]: E0409 01:14:27.283760    1652 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 09 01:14:27 multinode-611500 kubelet[1652]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 09 01:14:27 multinode-611500 kubelet[1652]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 09 01:14:27 multinode-611500 kubelet[1652]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 09 01:14:27 multinode-611500 kubelet[1652]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 09 01:14:27 multinode-611500 kubelet[1652]: I0409 01:14:27.322353    1652 scope.go:117] "RemoveContainer" containerID="9698a4747b5a1b2d3b10f5d8810c6f7ad448fa9bc74c3dbf1750ec60134408d5"
	Apr 09 01:14:27 multinode-611500 kubelet[1652]: I0409 01:14:27.990971    1652 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Apr 09 01:14:30 multinode-611500 kubelet[1652]: I0409 01:14:30.246406    1652 scope.go:117] "RemoveContainer" containerID="174a8c157134ab1ebb6773df8e9a28dff26ef29ce2450e568f270bd482061efb"
	Apr 09 01:14:46 multinode-611500 kubelet[1652]: I0409 01:14:46.482823    1652 scope.go:117] "RemoveContainer" containerID="81bdf2c1b915ffb16109129ef4772ee39e60258a13d7be56eb8abcf22788607d"
	Apr 09 01:14:46 multinode-611500 kubelet[1652]: I0409 01:14:46.483213    1652 scope.go:117] "RemoveContainer" containerID="dcb3873adaa5ef7384923efcc50ff68aa7aceb26619aacf9784f082f7f796d7e"
	Apr 09 01:14:46 multinode-611500 kubelet[1652]: E0409 01:14:46.483427    1652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8f7ea37f-c3a7-44fc-ac99-c184b674aca3)\"" pod="kube-system/storage-provisioner" podUID="8f7ea37f-c3a7-44fc-ac99-c184b674aca3"
	Apr 09 01:14:58 multinode-611500 kubelet[1652]: I0409 01:14:58.246486    1652 scope.go:117] "RemoveContainer" containerID="dcb3873adaa5ef7384923efcc50ff68aa7aceb26619aacf9784f082f7f796d7e"
	Apr 09 01:15:27 multinode-611500 kubelet[1652]: E0409 01:15:27.280430    1652 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 09 01:15:27 multinode-611500 kubelet[1652]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 09 01:15:27 multinode-611500 kubelet[1652]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 09 01:15:27 multinode-611500 kubelet[1652]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 09 01:15:27 multinode-611500 kubelet[1652]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 09 01:16:27 multinode-611500 kubelet[1652]: E0409 01:16:27.280547    1652 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 09 01:16:27 multinode-611500 kubelet[1652]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 09 01:16:27 multinode-611500 kubelet[1652]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 09 01:16:27 multinode-611500 kubelet[1652]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 09 01:16:27 multinode-611500 kubelet[1652]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-611500 -n multinode-611500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-611500 -n multinode-611500: (12.2890111s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-611500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (432.18s)

x
+
TestRunningBinaryUpgrade (10800.411s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2574181992.exe start -p running-upgrade-014400 --memory=2200 --vm-driver=hyperv
E0409 01:38:10.559055    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2574181992.exe start -p running-upgrade-014400 --memory=2200 --vm-driver=hyperv: (6m31.7039988s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-014400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
panic: test timed out after 3h0m0s
	running tests:
		TestCertExpiration (1m58s)
		TestKubernetesUpgrade (6m6s)
		TestNetworkPlugins (11m13s)
		TestRunningBinaryUpgrade (7m3s)
		TestStoppedBinaryUpgrade (4m41s)
		TestStoppedBinaryUpgrade/Upgrade (4m40s)

goroutine 1758 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2484 +0x394
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

goroutine 1 [chan receive, 7 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1753 +0x486
testing.tRunner(0xc000606c40, 0xc001487bc8)
	/usr/local/go/src/testing/testing.go:1798 +0x104
testing.runTests(0xc00048a0a8, {0x5927320, 0x2b, 0x2b}, {0xffffffffffffffff?, 0xc000912270?, 0x594e6a0?})
	/usr/local/go/src/testing/testing.go:2277 +0x4b4
testing.(*M).Run(0xc0005c0000)
	/usr/local/go/src/testing/testing.go:2142 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc0005c0000)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

goroutine 671 [IO wait, 161 minutes]:
internal/poll.runtime_pollWait(0x1a1c4991090, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0x93cbb3?, 0x891a76?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0005d02a0, 0xc0014dbba0)
	/usr/local/go/src/internal/poll/fd_windows.go:177 +0x105
internal/poll.(*FD).acceptOne(0xc0005d0288, 0x360, {0xc0009e2000?, 0xc0014dbc00?, 0x9472e5?}, 0xc0014dbc34?)
	/usr/local/go/src/internal/poll/fd_windows.go:946 +0x65
internal/poll.(*FD).Accept(0xc0005d0288, 0xc0014dbd80)
	/usr/local/go/src/internal/poll/fd_windows.go:980 +0x1b6
net.(*netFD).accept(0xc0005d0288)
	/usr/local/go/src/net/fd_windows.go:182 +0x4b
net.(*TCPListener).accept(0xc0004aa680)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1b
net.(*TCPListener).Accept(0xc0004aa680)
	/usr/local/go/src/net/tcpsock.go:380 +0x30
net/http.(*Server).Serve(0xc000168600, {0x3ed72a0, 0xc0004aa680})
	/usr/local/go/src/net/http/server.go:3424 +0x30c
net/http.(*Server).ListenAndServe(0xc000168600)
	/usr/local/go/src/net/http/server.go:3350 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2230
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 668
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2229 +0x129

goroutine 124 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3ef9700)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:311 +0x345
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 123
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:148 +0x245

goroutine 125 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001408580, 0xc000078230)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 123
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cache.go:122 +0x569

goroutine 161 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc001408550, 0x3c)
	/usr/local/go/src/runtime/sema.go:597 +0x15d
sync.(*Cond).Wait(0xc001665d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3efc5c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001408580)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:159 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000580008?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000894400, {0x3ea97a0, 0xc001406480}, 0x1, 0xc000078230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000894400, 0x3b9aca00, 0x0, 0x1, 0xc000078230)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 125
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:143 +0x1cf

goroutine 178 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3ee88f0, 0xc000078230}, 0xc001367f50, 0xc001367f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3ee88f0, 0xc000078230}, 0x90?, 0xc001367f50, 0xc001367f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3ee88f0?, 0xc000078230?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001367fd0?, 0xa0cc04?, 0xc000a66af0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 125
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:145 +0x27a

goroutine 179 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 178
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 1746 [syscall, 5 minutes]:
syscall.Syscall6(0x1a1c4db1478?, 0x1a1ff4c0a38?, 0x400?, 0xc000480008?, 0xc001374000?, 0xc00135fbf0?, 0x8e8659?, 0xc00086c340?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x5b0, {0xc001374200?, 0x200, 0x93df1f?}, 0x400?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc0006b9d48?, {0xc001374200?, 0x0?, 0xc00135fcb0?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc0006b9d48, {0xc001374200, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00098a1a0, {0xc001374200?, 0x886d3f?, 0x2c95420?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc0013d44e0, {0x3ea7ce0, 0xc00098a1d0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3ea7e60, 0xc0013d44e0}, {0x3ea7ce0, 0xc00098a1d0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3ea7e60, 0xc0013d44e0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x58cf2a0?, {0x3ea7e60?, 0xc0013d44e0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3ea7e60, 0xc0013d44e0}, {0x3ea7dc0, 0xc00098a1a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc00173cb60?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1732
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 606 [chan receive, 11 minutes]:
testing.(*testState).waitParallel(0xc0004be050)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00139a000)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00139a000)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestDockerFlags(0xc00139a000)
	/home/jenkins/workspace/Build_Cross/test/integration/docker_test.go:43 +0xf8
testing.tRunner(0xc00139a000, 0x3b4f7b8)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 1732 [syscall, 5 minutes]:
syscall.Syscall(0xc0017a3808?, 0x0?, 0x9d043b?, 0x1000000000000?, 0x4b?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x714, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1149 +0x5a
os.(*Process).wait(0xc000780780?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000780780)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc000780780)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc00167e1c0, 0xc000780780)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2.1()
	/home/jenkins/workspace/Build_Cross/test/integration/version_upgrade_test.go:183 +0x36d
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer.Operation.withEmptyData.func1()
	/home/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:18 +0x13
github.com/cenkalti/backoff/v4.doRetryNotify[...](0xc0017a3c50?, {0x3eca178, 0xc0017b2da0}, 0x3b50ac0, {0x0, 0x0?})
	/home/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:88 +0x11c
github.com/cenkalti/backoff/v4.RetryNotifyWithTimer(0x0?, {0x3eca178?, 0xc0017b2da0?}, 0x40?, {0x0?, 0x0?})
	/home/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:61 +0x56
github.com/cenkalti/backoff/v4.RetryNotify(...)
	/home/jenkins/go/pkg/mod/github.com/cenkalti/backoff/v4@v4.3.0/retry.go:49
k8s.io/minikube/pkg/util/retry.Expo(0xc000773e28, 0x3b9aca00, 0x1a3185c5000, {0xc000773d38?, 0x2ad04a0?, 0x402d16f?})
	/home/jenkins/workspace/Build_Cross/pkg/util/retry/retry.go:60 +0xe5
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade.func2(0xc00167e1c0)
	/home/jenkins/workspace/Build_Cross/test/integration/version_upgrade_test.go:188 +0x2b0
testing.tRunner(0xc00167e1c0, 0xc000956ec0)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1572
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 1755 [syscall]:
syscall.Syscall6(0x1a1c4daee58?, 0x1a1ff4c0ed0?, 0x800?, 0xc000580008?, 0xc00151f000?, 0xc001363bf0?, 0x8e8659?, 0xc0019928f0?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x2d4, {0xc00151f26f?, 0x591, 0x93df1f?}, 0x800?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc001492488?, {0xc00151f26f?, 0x0?, 0xc001363cb0?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc001492488, {0xc00151f26f, 0x591, 0x591})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006b40a0, {0xc00151f26f?, 0x886d3f?, 0x2c95420?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc0013d4090, {0x3ea7ce0, 0xc00090c058})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3ea7e60, 0xc0013d4090}, {0x3ea7ce0, 0xc00090c058}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3ea7e60, 0xc0013d4090})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x58cf2a0?, {0x3ea7e60?, 0xc0013d4090?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3ea7e60, 0xc0013d4090}, {0x3ea7dc0, 0xc0006b40a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0x3b4f7e0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1571
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 1525 [chan receive, 11 minutes]:
testing.(*T).Run(0xc000587880, {0x31b4d55?, 0xc0013bdf60?}, 0xc000abc120)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc000587880)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc000587880, 0x3b4f890)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 1548 [chan receive, 11 minutes]:
testing.(*testState).waitParallel(0xc0004be050)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00150c8c0)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00150c8c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00150c8c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00150c8c0, 0xc0006bc180)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1545
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 1586 [chan receive, 11 minutes]:
testing.(*testState).waitParallel(0xc0004be050)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00150d340)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00150d340)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00150d340)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00150d340, 0xc0006bcb80)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1545
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 1547 [chan receive, 11 minutes]:
testing.(*testState).waitParallel(0xc0004be050)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00150c700)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00150c700)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00150c700)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00150c700, 0xc0006bc100)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1545
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 1572 [chan receive, 5 minutes]:
testing.(*T).Run(0xc000020540, {0x31b89e6?, 0x3005753e800?}, 0xc000956ec0)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc000020540)
	/home/jenkins/workspace/Build_Cross/test/integration/version_upgrade_test.go:160 +0x2ab
testing.tRunner(0xc000020540, 0x3b4f8e0)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 638 [chan receive, 11 minutes]:
testing.(*testState).waitParallel(0xc0004be050)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc001878000)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc001878000)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc001878000)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:36 +0x87
testing.tRunner(0xc001878000, 0x3b4f7a8)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 1527 [chan receive, 11 minutes]:
testing.(*testState).waitParallel(0xc0004be050)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc000aa9180)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000aa9180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestPause(0xc000aa9180)
	/home/jenkins/workspace/Build_Cross/test/integration/pause_test.go:33 +0x2b
testing.tRunner(0xc000aa9180, 0x3b4f8a8)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 639 [syscall, 3 minutes]:
syscall.Syscall(0xc001577b68?, 0x0?, 0x9d043b?, 0x1000000000000?, 0x1e?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x578, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1149 +0x5a
os.(*Process).wait(0xc000866480?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000866480)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc000866480)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc0018781c0, 0xc000866480)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc0018781c0)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:123 +0x2bd
testing.tRunner(0xc0018781c0, 0x3b4f7a0)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 1545 [chan receive, 11 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1753 +0x486
testing.tRunner(0xc00150c380, 0xc000abc120)
	/usr/local/go/src/testing/testing.go:1798 +0x104
created by testing.(*T).Run in goroutine 1525
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 1748 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc000780780, 0xc00071e0e0)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 1732
	/usr/local/go/src/os/exec/exec.go:775 +0x989

goroutine 1552 [chan receive, 11 minutes]:
testing.(*testState).waitParallel(0xc0004be050)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00150cfc0)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00150cfc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00150cfc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00150cfc0, 0xc0006bc580)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1545
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 1551 [chan receive, 11 minutes]:
testing.(*testState).waitParallel(0xc0004be050)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00150ce00)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00150ce00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00150ce00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00150ce00, 0xc0006bc480)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1545
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 1549 [chan receive, 11 minutes]:
testing.(*testState).waitParallel(0xc0004be050)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00150ca80)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00150ca80)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00150ca80)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00150ca80, 0xc0006bc200)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1545
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 1747 [syscall, 5 minutes]:
syscall.Syscall6(0x1a1fffc5538?, 0x1a1ff4c0a38?, 0x200?, 0xc000800008?, 0xc00039e000?, 0xc00082fbf0?, 0x8e8659?, 0xc00088ced0?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x7e4, {0xc00039e000?, 0x200, 0x93df1f?}, 0x200?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc000a2efc8?, {0xc00039e000?, 0x0?, 0x25a00000000?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc000a2efc8, {0xc00039e000, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00098a1b8, {0xc00039e000?, 0x886d3f?, 0x2c95420?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc0013d4510, {0x3ea7ce0, 0xc000412040})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3ea7e60, 0xc0013d4510}, {0x3ea7ce0, 0xc000412040}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x8f0a37?, {0x3ea7e60, 0xc0013d4510})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x58cf2a0?, {0x3ea7e60?, 0xc0013d4510?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3ea7e60, 0xc0013d4510}, {0x3ea7dc0, 0xc00098a1b8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0x3b4f888?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1732
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 1546 [chan receive, 11 minutes]:
testing.(*testState).waitParallel(0xc0004be050)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00150c540)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00150c540)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00150c540)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00150c540, 0xc0006bc080)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1545
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 1550 [chan receive, 11 minutes]:
testing.(*testState).waitParallel(0xc0004be050)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00150cc40)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00150cc40)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00150cc40)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00150cc40, 0xc0006bc400)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1545
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 1571 [syscall]:
syscall.Syscall(0xc00179fa90?, 0x0?, 0x9d043b?, 0x1000000000000?, 0x1e?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x774, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1149 +0x5a
os.(*Process).wait(0xc000780180?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000780180)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc000780180)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc000020380, 0xc000780180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc000020380)
	/home/jenkins/workspace/Build_Cross/test/integration/version_upgrade_test.go:130 +0x735
testing.tRunner(0xc000020380, 0x3b4f8b8)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 1683 [select, 7 minutes]:
os/exec.(*Cmd).watchCtx(0xc001848480, 0xc00071e690)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 1573
	/usr/local/go/src/os/exec/exec.go:775 +0x989

goroutine 1573 [syscall, 7 minutes]:
syscall.Syscall(0xc00008b988?, 0x0?, 0x9d043b?, 0x1000000000000?, 0x1e?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x510, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1149 +0x5a
os.(*Process).wait(0xc001848480?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc001848480)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc001848480)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc000020700, 0xc001848480)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc000020700)
	/home/jenkins/workspace/Build_Cross/test/integration/version_upgrade_test.go:222 +0x365
testing.tRunner(0xc000020700, 0x3b4f858)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 1521 [chan receive, 11 minutes]:
testing.(*testState).waitParallel(0xc0004be050)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc000020000)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000020000)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop(0xc000020000)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:44 +0x18
testing.tRunner(0xc000020000, 0x3b4f8d8)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 1553 [chan receive, 11 minutes]:
testing.(*testState).waitParallel(0xc0004be050)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00150d180)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00150d180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00150d180)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00150d180, 0xc0006bc600)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1545
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

goroutine 1665 [syscall, 3 minutes]:
syscall.Syscall6(0x1a1c4daee58?, 0x1a1ff4c05a0?, 0x800?, 0xc001400008?, 0xc00151f800?, 0xc000503bf0?, 0x8e8659?, 0xc000503bf8?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x2cc, {0xc00151fa4f?, 0x5b1, 0x93df1f?}, 0x800?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc000a2ed88?, {0xc00151fa4f?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc000a2ed88, {0xc00151fa4f, 0x5b1, 0x5b1})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000412478, {0xc00151fa4f?, 0x886d3f?, 0x2c95420?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc00170b2f0, {0x3ea7ce0, 0xc00090c068})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3ea7e60, 0xc00170b2f0}, {0x3ea7ce0, 0xc00090c068}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000503e90?, {0x3ea7e60, 0xc00170b2f0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x58cf2a0?, {0x3ea7e60?, 0xc00170b2f0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3ea7e60, 0xc00170b2f0}, {0x3ea7dc0, 0xc000412478}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc00173c930?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1573
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 1682 [syscall]:
syscall.Syscall6(0x1a1ff4c0ed0?, 0x8000?, 0x4000?, 0xc00061a008?, 0xc001346000?, 0xc0013f9bf0?, 0x8e8665?, 0xc000600808?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x748, {0xc00134d639?, 0x9c7, 0x93df1f?}, 0x8000?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc000a2f208?, {0xc00134d639?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc000a2f208, {0xc00134d639, 0x9c7, 0x9c7})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0004125b8, {0xc00134d639?, 0x2034?, 0x2034?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc00170b320, {0x3ea7ce0, 0xc0006b41f8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3ea7e60, 0xc00170b320}, {0x3ea7ce0, 0xc0006b41f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x1?, {0x3ea7e60, 0xc00170b320})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x58cf2a0?, {0x3ea7e60?, 0xc00170b320?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3ea7e60, 0xc00170b320}, {0x3ea7dc0, 0xc0004125b8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc000a92388?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1573
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 1735 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc000866480, 0xc000078930)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 639
	/usr/local/go/src/os/exec/exec.go:775 +0x989

goroutine 1733 [syscall, 3 minutes]:
syscall.Syscall6(0x1a1c4daee58?, 0x1a1ff4c0ed0?, 0x800?, 0xc00160f008?, 0xc00151e800?, 0xc000773bf0?, 0x8e8659?, 0xc0015e6fc0?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x56c, {0xc00151ea07?, 0x5f9, 0x93df1f?}, 0x800?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc0006b8fc8?, {0xc00151ea07?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc0006b8fc8, {0xc00151ea07, 0x5f9, 0x5f9})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00098a088, {0xc00151ea07?, 0x886d3f?, 0x2c95420?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc00168e120, {0x3ea7ce0, 0xc000412c88})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3ea7e60, 0xc00168e120}, {0x3ea7ce0, 0xc000412c88}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3ea7e60, 0xc00168e120})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x58cf2a0?, {0x3ea7e60?, 0xc00168e120?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3ea7e60, 0xc00168e120}, {0x3ea7dc0, 0xc00098a088}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc000956ec0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 639
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 1734 [syscall, 3 minutes]:
syscall.Syscall6(0x1a1fffc5538?, 0x1a1ff4c0ed0?, 0x200?, 0xc000480008?, 0xc00039e600?, 0xc000775bf0?, 0x8e8659?, 0x35?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x520, {0xc00039e600?, 0x200, 0x93df1f?}, 0x200?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc0006b9448?, {0xc00039e600?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc0006b9448, {0xc00039e600, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00098a0a0, {0xc00039e600?, 0x886d3f?, 0x2c95420?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc00168e150, {0x3ea7ce0, 0xc00090c060})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3ea7e60, 0xc00168e150}, {0x3ea7ce0, 0xc00090c060}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3ea7e60, 0xc00168e150})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x58cf2a0?, {0x3ea7e60?, 0xc00168e150?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3ea7e60, 0xc00168e150}, {0x3ea7dc0, 0xc00098a0a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc0017084d0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 639
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 1756 [syscall]:
syscall.Syscall6(0x1a1c4a60b48?, 0x1a1ff4c0ed0?, 0x4000?, 0xc000600808?, 0xc00090e000?, 0xc000ab1bf0?, 0x8e8659?, 0xc0006b4120?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x4e4, {0xc0009100d6?, 0x1f2a, 0x93df1f?}, 0x4000?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc001492908?, {0xc0009100d6?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc001492908, {0xc0009100d6, 0x1f2a, 0x1f2a})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006b40f0, {0xc0009100d6?, 0x886d3f?, 0x2c95420?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc0013d40c0, {0x3ea7ce0, 0xc000412118})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3ea7e60, 0xc0013d40c0}, {0x3ea7ce0, 0xc000412118}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3ea7e60, 0xc0013d40c0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x58cf2a0?, {0x3ea7e60?, 0xc0013d40c0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3ea7e60, 0xc0013d40c0}, {0x3ea7dc0, 0xc0006b40f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0x3b4f8b8?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1571
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

goroutine 1757 [select]:
os/exec.(*Cmd).watchCtx(0xc000780180, 0xc00071e1c0)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 1571
	/usr/local/go/src/os/exec/exec.go:775 +0x989

TestNoKubernetes/serial/StartWithK8s (303.45s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-159600 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-159600 --driver=hyperv: exit status 1 (4m59.7283013s)

-- stdout --
	* [NoKubernetes-159600] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-159600" primary control-plane node in "NoKubernetes-159600" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

-- /stdout --
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-159600 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-159600 -n NoKubernetes-159600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-159600 -n NoKubernetes-159600: exit status 7 (3.7168383s)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0409 01:39:02.130315    8780 main.go:137] libmachine: [stderr =====>] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "NoKubernetes-159600".
	At line:1 char:3
	+ ( Hyper-V\Get-VM NoKubernetes-159600 ).state
	+   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	    + CategoryInfo          : InvalidArgument: (NoKubernetes-159600:String) [Get-VM], VirtualizationException
	    + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM
	 
	

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-159600" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (303.45s)

Test pass (100/140)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 14.81
4 TestDownloadOnly/v1.20.0/preload-exists 0.08
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.34
9 TestDownloadOnly/v1.20.0/DeleteAll 0.9
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.9
12 TestDownloadOnly/v1.32.2/json-events 14.43
13 TestDownloadOnly/v1.32.2/preload-exists 0
16 TestDownloadOnly/v1.32.2/kubectl 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.46
18 TestDownloadOnly/v1.32.2/DeleteAll 0.7
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.76
21 TestBinaryMirror 9.42
22 TestOffline 250.5
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.29
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.27
27 TestAddons/Setup 434.54
29 TestAddons/serial/Volcano 63.96
31 TestAddons/serial/GCPAuth/Namespaces 0.34
32 TestAddons/serial/GCPAuth/FakeCredentials 9.63
35 TestAddons/parallel/Registry 35.83
36 TestAddons/parallel/Ingress 68.26
37 TestAddons/parallel/InspektorGadget 27.13
38 TestAddons/parallel/MetricsServer 22.06
40 TestAddons/parallel/CSI 77.28
41 TestAddons/parallel/Headlamp 41.71
42 TestAddons/parallel/CloudSpanner 22.35
43 TestAddons/parallel/LocalPath 86.65
44 TestAddons/parallel/NvidiaDevicePlugin 22.89
45 TestAddons/parallel/Yakd 26.72
47 TestAddons/StoppedEnableDisable 52.94
51 TestForceSystemdFlag 392.41
52 TestForceSystemdEnv 541.22
59 TestErrorSpam/start 17.26
60 TestErrorSpam/status 36.79
61 TestErrorSpam/pause 22.67
62 TestErrorSpam/unpause 23.39
63 TestErrorSpam/stop 62.65
66 TestFunctional/serial/CopySyncFile 0.05
67 TestFunctional/serial/StartWithProxy 199.33
68 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/KubeContext 0.13
74 TestFunctional/serial/CacheCmd/cache/add_remote 349.03
75 TestFunctional/serial/CacheCmd/cache/add_local 60.73
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.27
77 TestFunctional/serial/CacheCmd/cache/list 0.27
80 TestFunctional/serial/CacheCmd/cache/delete 0.51
87 TestFunctional/delete_echo-server_images 0.02
88 TestFunctional/delete_my-image_image 0.01
89 TestFunctional/delete_minikube_cached_images 0.01
94 TestMultiControlPlane/serial/StartCluster 710.53
95 TestMultiControlPlane/serial/DeployApp 14.4
97 TestMultiControlPlane/serial/AddWorkerNode 265.22
98 TestMultiControlPlane/serial/NodeLabels 0.18
99 TestMultiControlPlane/serial/HAppyAfterClusterStart 48.7
100 TestMultiControlPlane/serial/CopyFile 642.15
104 TestImageBuild/serial/Setup 194.98
105 TestImageBuild/serial/NormalBuild 10.73
106 TestImageBuild/serial/BuildWithBuildArg 8.94
107 TestImageBuild/serial/BuildWithDockerIgnore 8.17
108 TestImageBuild/serial/BuildWithSpecifiedDockerfile 8.29
112 TestJSONOutput/start/Command 202.02
113 TestJSONOutput/start/Audit 0
115 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
116 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
118 TestJSONOutput/pause/Command 7.98
119 TestJSONOutput/pause/Audit 0
121 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
122 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
124 TestJSONOutput/unpause/Command 7.65
125 TestJSONOutput/unpause/Audit 0
127 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
128 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
130 TestJSONOutput/stop/Command 34.47
131 TestJSONOutput/stop/Audit 0
133 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
134 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
135 TestErrorJSONOutput 0.96
140 TestMainNoArgs 0.24
141 TestMinikubeProfile 532.41
144 TestMountStart/serial/StartWithMountFirst 154.29
145 TestMountStart/serial/VerifyMountFirst 9.43
146 TestMountStart/serial/StartWithMountSecond 153.9
147 TestMountStart/serial/VerifyMountSecond 9.6
148 TestMountStart/serial/DeleteFirst 30.67
149 TestMountStart/serial/VerifyMountPostDelete 9.41
150 TestMountStart/serial/Stop 26.34
151 TestMountStart/serial/RestartStopped 118.38
152 TestMountStart/serial/VerifyMountPostStop 9.47
155 TestMultiNode/serial/FreshStart2Nodes 429.95
156 TestMultiNode/serial/DeployApp2Nodes 10.11
158 TestMultiNode/serial/AddNode 241.47
159 TestMultiNode/serial/MultiNodeLabels 0.18
160 TestMultiNode/serial/ProfileList 35.65
161 TestMultiNode/serial/CopyFile 359.34
162 TestMultiNode/serial/StopNode 76.94
163 TestMultiNode/serial/StartAfterStop 194.87
168 TestPreload 574.41
169 TestScheduledStopWindows 329.63
179 TestNoKubernetes/serial/StartNoK8sWithVersion 0.39
TestDownloadOnly/v1.20.0/json-events (14.81s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-980500 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-980500 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (14.8093452s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (14.81s)

TestDownloadOnly/v1.20.0/preload-exists (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0408 22:45:26.254547    9864 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0408 22:45:26.331585    9864 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.08s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.34s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-980500
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-980500: exit status 85 (344.2658ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-980500 | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:45 UTC |          |
	|         | -p download-only-980500        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 22:45:11
	Running on machine: minikube6
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 22:45:11.550990    4048 out.go:345] Setting OutFile to fd 720 ...
	I0408 22:45:11.627185    4048 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 22:45:11.627185    4048 out.go:358] Setting ErrFile to fd 724...
	I0408 22:45:11.627185    4048 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0408 22:45:11.640079    4048 root.go:314] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0408 22:45:11.650098    4048 out.go:352] Setting JSON to true
	I0408 22:45:11.652889    4048 start.go:129] hostinfo: {"hostname":"minikube6","uptime":9309,"bootTime":1744143002,"procs":179,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0408 22:45:11.652889    4048 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 22:45:11.661975    4048 out.go:97] [download-only-980500] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0408 22:45:11.661975    4048 notify.go:220] Checking for updates...
	W0408 22:45:11.661975    4048 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0408 22:45:11.665085    4048 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0408 22:45:11.667796    4048 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0408 22:45:11.671102    4048 out.go:169] MINIKUBE_LOCATION=20501
	I0408 22:45:11.674511    4048 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0408 22:45:11.680450    4048 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 22:45:11.681440    4048 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 22:45:16.910864    4048 out.go:97] Using the hyperv driver based on user configuration
	I0408 22:45:16.910966    4048 start.go:297] selected driver: hyperv
	I0408 22:45:16.911089    4048 start.go:901] validating driver "hyperv" against <nil>
	I0408 22:45:16.911409    4048 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0408 22:45:16.965227    4048 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0408 22:45:16.966113    4048 start_flags.go:957] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 22:45:16.966238    4048 cni.go:84] Creating CNI manager for ""
	I0408 22:45:16.966238    4048 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0408 22:45:16.966238    4048 start.go:340] cluster config:
	{Name:download-only-980500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-980500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:45:16.967726    4048 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 22:45:16.970843    4048 out.go:97] Downloading VM boot image ...
	I0408 22:45:16.970843    4048 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.35.0-amd64.iso
	I0408 22:45:20.876232    4048 out.go:97] Starting "download-only-980500" primary control-plane node in "download-only-980500" cluster
	I0408 22:45:20.876232    4048 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0408 22:45:20.975497    4048 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0408 22:45:20.975497    4048 cache.go:56] Caching tarball of preloaded images
	I0408 22:45:20.976050    4048 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0408 22:45:20.979361    4048 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0408 22:45:20.979361    4048 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0408 22:45:21.079572    4048 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-980500 host does not exist
	  To start a cluster, run: "minikube start -p download-only-980500"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.34s)

TestDownloadOnly/v1.20.0/DeleteAll (0.9s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.90s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.9s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-980500
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.90s)

TestDownloadOnly/v1.32.2/json-events (14.43s)

=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-736500 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-736500 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=docker --driver=hyperv: (14.431405s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (14.43s)

TestDownloadOnly/v1.32.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0408 22:45:42.908600    9864 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0408 22:45:42.908600    9864 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)
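
Note: preload-exists completes in 0s because the json-events run above already downloaded the v1.32.2 preload; minikube caches the tarball under .minikube\cache\preloaded-tarball and reuses it on later starts instead of re-downloading. To inspect the cache from this job's MINIKUBE_HOME:

    dir C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball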

TestDownloadOnly/v1.32.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.2/kubectl
--- PASS: TestDownloadOnly/v1.32.2/kubectl (0.00s)

TestDownloadOnly/v1.32.2/LogsDuration (0.46s)

=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-736500
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-736500: exit status 85 (459.5636ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-980500 | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:45 UTC |                     |
	|         | -p download-only-980500        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:45 UTC | 08 Apr 25 22:45 UTC |
	| delete  | -p download-only-980500        | download-only-980500 | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:45 UTC | 08 Apr 25 22:45 UTC |
	| start   | -o=json --download-only        | download-only-736500 | minikube6\jenkins | v1.35.0 | 08 Apr 25 22:45 UTC |                     |
	|         | -p download-only-736500        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/08 22:45:28
	Running on machine: minikube6
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0408 22:45:28.595795     120 out.go:345] Setting OutFile to fd 736 ...
	I0408 22:45:28.682654     120 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 22:45:28.682743     120 out.go:358] Setting ErrFile to fd 752...
	I0408 22:45:28.682743     120 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0408 22:45:28.700865     120 out.go:352] Setting JSON to true
	I0408 22:45:28.705307     120 start.go:129] hostinfo: {"hostname":"minikube6","uptime":9326,"bootTime":1744143002,"procs":183,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0408 22:45:28.705437     120 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0408 22:45:28.713418     120 out.go:97] [download-only-736500] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0408 22:45:28.714291     120 notify.go:220] Checking for updates...
	I0408 22:45:28.715625     120 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0408 22:45:28.718909     120 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0408 22:45:28.721269     120 out.go:169] MINIKUBE_LOCATION=20501
	I0408 22:45:28.724257     120 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0408 22:45:28.730402     120 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0408 22:45:28.731544     120 driver.go:404] Setting default libvirt URI to qemu:///system
	I0408 22:45:34.007075     120 out.go:97] Using the hyperv driver based on user configuration
	I0408 22:45:34.007075     120 start.go:297] selected driver: hyperv
	I0408 22:45:34.007160     120 start.go:901] validating driver "hyperv" against <nil>
	I0408 22:45:34.007307     120 start_flags.go:311] no existing cluster config was found, will generate one from the flags 
	I0408 22:45:34.056245     120 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0408 22:45:34.057120     120 start_flags.go:957] Wait components to verify : map[apiserver:true system_pods:true]
	I0408 22:45:34.058178     120 cni.go:84] Creating CNI manager for ""
	I0408 22:45:34.058178     120 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0408 22:45:34.058178     120 start_flags.go:320] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0408 22:45:34.058178     120 start.go:340] cluster config:
	{Name:download-only-736500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-736500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0408 22:45:34.058178     120 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0408 22:45:34.062318     120 out.go:97] Starting "download-only-736500" primary control-plane node in "download-only-736500" cluster
	I0408 22:45:34.062318     120 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0408 22:45:34.128104     120 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0408 22:45:34.128258     120 cache.go:56] Caching tarball of preloaded images
	I0408 22:45:34.128377     120 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0408 22:45:34.132305     120 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0408 22:45:34.132305     120 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 ...
	I0408 22:45:34.212640     120 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4?checksum=md5:c3fdd273d8c9002513e1c87be8fe9ffc -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0408 22:45:38.111332     120 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 ...
	I0408 22:45:38.111994     120 preload.go:254] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-736500 host does not exist
	  To start a cluster, run: "minikube start -p download-only-736500"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.46s)

TestDownloadOnly/v1.32.2/DeleteAll (0.7s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.70s)

TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.76s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-736500
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.76s)

TestBinaryMirror (9.42s)

=== RUN   TestBinaryMirror
I0408 22:45:46.177419    9864 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-831900 --alsologtostderr --binary-mirror http://127.0.0.1:53352 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-831900 --alsologtostderr --binary-mirror http://127.0.0.1:53352 --driver=hyperv: (8.6868009s)
helpers_test.go:175: Cleaning up "binary-mirror-831900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-831900
--- PASS: TestBinaryMirror (9.42s)
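
Note: TestBinaryMirror points minikube at a throwaway local HTTP server for the kubectl download instead of dl.k8s.io; port 53352 is whatever the harness happened to bind. Sketch of the same invocation (mirror URL illustrative; the mirror is assumed to expose the same release/<version>/bin/... layout as dl.k8s.io):

    minikube start --download-only -p demo --binary-mirror http://127.0.0.1:53352 --driver=hyperv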

TestOffline (250.5s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-159600 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-159600 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (3m24.2361816s)
helpers_test.go:175: Cleaning up "offline-docker-159600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-159600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-159600: (46.2578735s)
--- PASS: TestOffline (250.50s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.29s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-582000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-582000: exit status 85 (286.0462ms)

-- stdout --
	* Profile "addons-582000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-582000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.29s)
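
Note: like "minikube logs" on a download-only profile, addon commands against a profile that does not exist exit with status 85 and print a hint instead of an error dump; the two PreSetup tests assert exactly that stdout. Reproduction sketch, profile name illustrative:

    minikube addons enable dashboard -p no-such-profile    # exits 85 with the 'Profile "no-such-profile" not found' hint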

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.27s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-582000
addons_test.go:950: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-582000: exit status 85 (272.6086ms)

-- stdout --
	* Profile "addons-582000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-582000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.27s)

TestAddons/Setup (434.54s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-582000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-582000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (7m14.5373797s)
--- PASS: TestAddons/Setup (434.54s)

TestAddons/serial/Volcano (63.96s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 42.675ms
addons_test.go:815: volcano-admission stabilized in 47.099ms
addons_test.go:807: volcano-scheduler stabilized in 47.4999ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-d8kfn" [a113c050-d161-450a-942a-52a5116f3807] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0064891s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-cbscz" [703ad73d-0b6d-40a2-9186-748908141ca8] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0058707s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-7nswm" [a1c03018-3a21-45bf-be1e-7c8cf9085f23] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005794s
addons_test.go:842: (dbg) Run:  kubectl --context addons-582000 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-582000 create -f testdata\vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-582000 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a9f3ac68-0dbe-437b-93c5-395eea9079b7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [a9f3ac68-0dbe-437b-93c5-395eea9079b7] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 21.0039567s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-582000 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-582000 addons disable volcano --alsologtostderr -v=1: (26.0035467s)
--- PASS: TestAddons/serial/Volcano (63.96s)
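
Note: the Volcano check deletes the admission-init job, submits testdata\vcjob.yaml, and waits for the "volcano.sh/job-name=test-job" pod to run. The testdata file itself is not reproduced in this report; a vcjob of the same shape would look roughly like the following (fields per Volcano's batch.volcano.sh/v1alpha1 API, values illustrative; the pod name test-job-nginx-0 above implies a single task named "nginx"):

    apiVersion: batch.volcano.sh/v1alpha1
    kind: Job
    metadata:
      name: test-job
      namespace: my-volcano
    spec:
      schedulerName: volcano    # hand the job to the Volcano scheduler
      minAvailable: 1           # gang-scheduling threshold
      tasks:
        - name: nginx
          replicas: 1
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: nginx
                  image: nginx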

TestAddons/serial/GCPAuth/Namespaces (0.34s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-582000 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-582000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.34s)

TestAddons/serial/GCPAuth/FakeCredentials (9.63s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-582000 create -f testdata\busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-582000 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f6a6ccf8-7c0c-424e-9576-4f01fdb92aea] Pending
helpers_test.go:344: "busybox" [f6a6ccf8-7c0c-424e-9576-4f01fdb92aea] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.0056013s
addons_test.go:633: (dbg) Run:  kubectl --context addons-582000 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-582000 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-582000 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-582000 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.63s)

TestAddons/parallel/Registry (35.83s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 7.479ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-nwgdc" [1fd4b576-0a53-46a4-bfa2-8f65017ca5f9] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0039548s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-k8scx" [996bd039-858e-470c-b580-97fdec3e07d5] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0055867s
addons_test.go:331: (dbg) Run:  kubectl --context addons-582000 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-582000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-582000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.608757s)
addons_test.go:350: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-582000 ip
addons_test.go:350: (dbg) Done: out/minikube-windows-amd64.exe -p addons-582000 ip: (3.0691023s)
2025/04/08 22:55:12 [DEBUG] GET http://192.168.121.174:5000
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-582000 addons disable registry --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-582000 addons disable registry --alsologtostderr -v=1: (16.8541953s)
--- PASS: TestAddons/parallel/Registry (35.83s)

TestAddons/parallel/Ingress (68.26s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-582000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-582000 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-582000 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c625ac31-b863-4188-8bf1-16b76c8fde7c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c625ac31-b863-4188-8bf1-16b76c8fde7c] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.0071172s
I0408 22:56:06.936468    9864 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-582000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-582000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.0426126s)
addons_test.go:286: (dbg) Run:  kubectl --context addons-582000 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-582000 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-582000 ip: (2.8373103s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.121.174
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-582000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-582000 addons disable ingress-dns --alsologtostderr -v=1: (16.633968s)
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-582000 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-582000 addons disable ingress --alsologtostderr -v=1: (22.1948616s)
--- PASS: TestAddons/parallel/Ingress (68.26s)

TestAddons/parallel/InspektorGadget (27.13s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6w7lv" [d1f48894-d888-4bd5-98f1-96f06bac2c26] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0085632s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-582000 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-582000 addons disable inspektor-gadget --alsologtostderr -v=1: (21.1215729s)
--- PASS: TestAddons/parallel/InspektorGadget (27.13s)

TestAddons/parallel/MetricsServer (22.06s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 13.5504ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-rwzrr" [476abf78-539c-44b3-8c86-a88a3864fa7a] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0151508s
addons_test.go:402: (dbg) Run:  kubectl --context addons-582000 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-582000 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-582000 addons disable metrics-server --alsologtostderr -v=1: (15.8042462s)
--- PASS: TestAddons/parallel/MetricsServer (22.06s)

TestAddons/parallel/CSI (77.28s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0408 22:55:35.604186    9864 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0408 22:55:35.613299    9864 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0408 22:55:35.613299    9864 kapi.go:107] duration metric: took 9.1132ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 9.1132ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-582000 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-582000 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [568dcc72-bc3a-459a-a49d-4ce32deeedf6] Pending
helpers_test.go:344: "task-pv-pod" [568dcc72-bc3a-459a-a49d-4ce32deeedf6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [568dcc72-bc3a-459a-a49d-4ce32deeedf6] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.0117419s
addons_test.go:511: (dbg) Run:  kubectl --context addons-582000 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-582000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-582000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-582000 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-582000 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-582000 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-582000 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [97d3549e-15fb-4f25-98af-a5fccd0e3e22] Pending
helpers_test.go:344: "task-pv-pod-restore" [97d3549e-15fb-4f25-98af-a5fccd0e3e22] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [97d3549e-15fb-4f25-98af-a5fccd0e3e22] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0065248s
addons_test.go:553: (dbg) Run:  kubectl --context addons-582000 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-582000 delete pod task-pv-pod-restore: (1.758413s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-582000 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-582000 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-582000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-582000 addons disable volumesnapshots --alsologtostderr -v=1: (16.4987457s)
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-582000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-582000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (21.7470037s)
--- PASS: TestAddons/parallel/CSI (77.28s)
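
Note: the CSI sequence above is a full snapshot round-trip: create PVC "hpvc", run a pod against it, snapshot it, delete the pod and claim, restore a new claim from the snapshot, and run a pod against the restore. The testdata manifests are not shown in this report; the initial claim is presumably shaped like this (storageClassName assumed from the csi-hostpath-driver addon's default class):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: csi-hostpath-sc    # assumption: default class installed by the addon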

TestAddons/parallel/Headlamp (41.71s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-582000 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-582000 --alsologtostderr -v=1: (16.513736s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-v2jbv" [faee85c6-8b52-4ef5-8e6d-0b6331c0b8b2] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-v2jbv" [faee85c6-8b52-4ef5-8e6d-0b6331c0b8b2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-v2jbv" [faee85c6-8b52-4ef5-8e6d-0b6331c0b8b2] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-v2jbv" [faee85c6-8b52-4ef5-8e6d-0b6331c0b8b2] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.0084722s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-582000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-582000 addons disable headlamp --alsologtostderr -v=1: (8.1798029s)
--- PASS: TestAddons/parallel/Headlamp (41.71s)

TestAddons/parallel/CloudSpanner (22.35s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7dc7f9b5b8-ww8b2" [2e343829-2986-4976-bb82-797d22161242] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0064594s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-582000 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-582000 addons disable cloud-spanner --alsologtostderr -v=1: (16.3187321s)
--- PASS: TestAddons/parallel/CloudSpanner (22.35s)

TestAddons/parallel/LocalPath (86.65s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-582000 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-582000 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [fdf4140a-9893-4366-9a96-734284a422c5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [fdf4140a-9893-4366-9a96-734284a422c5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [fdf4140a-9893-4366-9a96-734284a422c5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.021977s
addons_test.go:906: (dbg) Run:  kubectl --context addons-582000 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-582000 ssh "cat /opt/local-path-provisioner/pvc-b0575234-bc82-4444-9a94-3c199462b7f7_default_test-pvc/file1"
addons_test.go:915: (dbg) Done: out/minikube-windows-amd64.exe -p addons-582000 ssh "cat /opt/local-path-provisioner/pvc-b0575234-bc82-4444-9a94-3c199462b7f7_default_test-pvc/file1": (11.0527547s)
addons_test.go:927: (dbg) Run:  kubectl --context addons-582000 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-582000 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-582000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-582000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m1.9407329s)
--- PASS: TestAddons/parallel/LocalPath (86.65s)
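
Note: the "ssh cat" step above reads the file straight off the node filesystem, which is where the rancher local-path provisioner materializes volumes: /opt/local-path-provisioner/pvc-<uid>_<namespace>_<claim>. To poke at it on a live profile (profile name illustrative):

    minikube ssh -p demo "ls /opt/local-path-provisioner"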

TestAddons/parallel/NvidiaDevicePlugin (22.89s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2f52l" [88086c92-04b8-4880-9d77-08a15f308e17] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0192825s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-582000 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-582000 addons disable nvidia-device-plugin --alsologtostderr -v=1: (16.8701592s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (22.89s)

TestAddons/parallel/Yakd (26.72s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-qdhwx" [cc1b91ab-fd81-417d-8faa-233b561f55ce] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0059805s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-582000 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-582000 addons disable yakd --alsologtostderr -v=1: (21.7125997s)
--- PASS: TestAddons/parallel/Yakd (26.72s)

TestAddons/StoppedEnableDisable (52.94s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-582000
addons_test.go:170: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-582000: (40.518693s)
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-582000
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-582000: (4.9165931s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-582000
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-582000: (4.8442585s)
addons_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-582000
addons_test.go:183: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-582000: (2.6552524s)
--- PASS: TestAddons/StoppedEnableDisable (52.94s)

TestForceSystemdFlag (392.41s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-696000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-696000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (5m42.9567452s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-696000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-696000 ssh "docker info --format {{.CgroupDriver}}": (10.1805124s)
helpers_test.go:175: Cleaning up "force-systemd-flag-696000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-696000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-696000: (39.2710619s)
--- PASS: TestForceSystemdFlag (392.41s)
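
Note: TestForceSystemdFlag asserts that --force-systemd makes Docker inside the VM use the systemd cgroup driver, which is what the "docker info" probe at the end checks. Sketch, profile name illustrative:

    minikube start -p demo --memory=2048 --force-systemd --driver=hyperv
    minikube ssh -p demo "docker info --format {{.CgroupDriver}}"    # expect: systemd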

TestForceSystemdEnv (541.22s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-045900 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-045900 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (8m4.7409733s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-045900 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-045900 ssh "docker info --format {{.CgroupDriver}}": (9.9806733s)
helpers_test.go:175: Cleaning up "force-systemd-env-045900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-045900
E0409 01:43:10.562677    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-045900: (46.4979238s)
--- PASS: TestForceSystemdEnv (541.22s)

TestErrorSpam/start (17.26s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 start --dry-run: (5.5880313s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 start --dry-run: (5.8328459s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 start --dry-run: (5.8361814s)
--- PASS: TestErrorSpam/start (17.26s)

TestErrorSpam/status (36.79s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 status: (12.5906654s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 status: (12.0935727s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 status: (12.1057568s)
--- PASS: TestErrorSpam/status (36.79s)

TestErrorSpam/pause (22.67s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 pause: (7.7935034s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 pause: (7.5075522s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 pause: (7.3609788s)
--- PASS: TestErrorSpam/pause (22.67s)

TestErrorSpam/unpause (23.39s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 unpause
E0408 23:03:10.436780    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0408 23:03:10.445777    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0408 23:03:10.458338    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0408 23:03:10.480778    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0408 23:03:10.523780    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0408 23:03:10.606821    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0408 23:03:10.769169    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0408 23:03:11.092156    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0408 23:03:11.735097    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0408 23:03:13.018154    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 unpause: (7.7985258s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 unpause
E0408 23:03:15.580533    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0408 23:03:20.703254    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 unpause: (7.7696894s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 unpause: (7.8168649s)
--- PASS: TestErrorSpam/unpause (23.39s)

TestErrorSpam/stop (62.65s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 stop
E0408 23:03:30.945470    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0408 23:03:51.428333    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 stop: (40.4259403s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 stop: (11.424303s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 stop
E0408 23:04:32.391416    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-268300 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-268300 stop: (10.7991635s)
--- PASS: TestErrorSpam/stop (62.65s)

TestFunctional/serial/CopySyncFile (0.05s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9864\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.05s)

TestFunctional/serial/StartWithProxy (199.33s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-618200 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0408 23:05:54.315305    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-618200 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m19.32266s)
--- PASS: TestFunctional/serial/StartWithProxy (199.33s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/KubeContext (0.13s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

TestFunctional/serial/CacheCmd/cache/add_remote (349.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-618200 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-windows-amd64.exe -p functional-618200 cache add registry.k8s.io/pause:3.1: (1m48.0065538s)
functional_test.go:1066: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-618200 cache add registry.k8s.io/pause:3.3
E0408 23:18:10.448129    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0408 23:19:33.529818    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:1066: (dbg) Done: out/minikube-windows-amd64.exe -p functional-618200 cache add registry.k8s.io/pause:3.3: (2m0.5181277s)
functional_test.go:1066: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-618200 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-windows-amd64.exe -p functional-618200 cache add registry.k8s.io/pause:latest: (2m0.5008194s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (349.03s)

TestFunctional/serial/CacheCmd/cache/add_local (60.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-618200 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1287881752\001
functional_test.go:1094: (dbg) Done: docker build -t minikube-local-cache-test:functional-618200 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1287881752\001: (1.9286025s)
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-618200 cache add minikube-local-cache-test:functional-618200
functional_test.go:1106: (dbg) Done: out/minikube-windows-amd64.exe -p functional-618200 cache add minikube-local-cache-test:functional-618200: (58.3912201s)
functional_test.go:1111: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-618200 cache delete minikube-local-cache-test:functional-618200
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-618200
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (60.73s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.27s)

TestFunctional/serial/CacheCmd/cache/list (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.27s)

TestFunctional/serial/CacheCmd/cache/delete (0.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.51s)

TestFunctional/delete_echo-server_images (0.02s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Non-zero exit: docker rmi -f kicbase/echo-server:1.0: context deadline exceeded (0s)
functional_test.go:209: failed to remove image "kicbase/echo-server:1.0" from docker images. args "docker rmi -f kicbase/echo-server:1.0": context deadline exceeded
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-618200
functional_test.go:207: (dbg) Non-zero exit: docker rmi -f kicbase/echo-server:functional-618200: context deadline exceeded (0s)
functional_test.go:209: failed to remove image "kicbase/echo-server:functional-618200" from docker images. args "docker rmi -f kicbase/echo-server:functional-618200": context deadline exceeded
--- PASS: TestFunctional/delete_echo-server_images (0.02s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-618200
functional_test.go:215: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-618200: context deadline exceeded (0s)
functional_test.go:217: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-618200": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-618200
functional_test.go:223: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-618200: context deadline exceeded (0s)
functional_test.go:225: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-618200": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)
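Note: all three delete_* cleanup tests above pass even though every docker rmi -f call fails with "context deadline exceeded (0s)". The error string and the 0s elapsed time indicate the commands ran under a Go context whose deadline had already expired, and the harness logs cleanup failures rather than failing the test. A minimal sketch (not the actual harness code; the image name is copied from the log above) that reproduces the same error:

package main

import (
	"context"
	"fmt"
	"os/exec"
)

func main() {
	// A zero timeout yields a context that is already past its deadline,
	// so os/exec refuses to start the process and returns immediately.
	ctx, cancel := context.WithTimeout(context.Background(), 0)
	defer cancel()

	cmd := exec.CommandContext(ctx, "docker", "rmi", "-f", "kicbase/echo-server:1.0")
	if err := cmd.Run(); err != nil {
		fmt.Println(err) // prints "context deadline exceeded", matching the log
	}
}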
TestMultiControlPlane/serial/StartCluster (710.53s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-061400 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0408 23:48:10.473159    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0408 23:52:53.561544    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0408 23:53:10.476055    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-061400 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m13.545554s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 status -v=7 --alsologtostderr: (36.9824505s)
--- PASS: TestMultiControlPlane/serial/StartCluster (710.53s)

TestMultiControlPlane/serial/DeployApp (14.4s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-061400 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-061400 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-061400 -- rollout status deployment/busybox: (4.9998496s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-061400 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-061400 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-061400 -- exec busybox-58667487b6-8xfwm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-061400 -- exec busybox-58667487b6-8xfwm -- nslookup kubernetes.io: (2.1856497s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-061400 -- exec busybox-58667487b6-rjkqv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-061400 -- exec busybox-58667487b6-rxp4w -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-061400 -- exec busybox-58667487b6-rxp4w -- nslookup kubernetes.io: (1.8422762s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-061400 -- exec busybox-58667487b6-8xfwm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-061400 -- exec busybox-58667487b6-rjkqv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-061400 -- exec busybox-58667487b6-rxp4w -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-061400 -- exec busybox-58667487b6-8xfwm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-061400 -- exec busybox-58667487b6-rjkqv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-061400 -- exec busybox-58667487b6-rxp4w -- nslookup kubernetes.default.svc.cluster.local
E0408 23:58:10.479923    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/DeployApp (14.40s)

TestMultiControlPlane/serial/AddWorkerNode (265.22s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-061400 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-061400 -v=7 --alsologtostderr: (3m36.53983s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 status -v=7 --alsologtostderr
E0409 00:03:10.484188    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 status -v=7 --alsologtostderr: (48.6850417s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (265.22s)

TestMultiControlPlane/serial/NodeLabels (0.18s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-061400 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.18s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (48.7s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (48.6963927s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (48.70s)

TestMultiControlPlane/serial/CopyFile (642.15s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 status --output json -v=7 --alsologtostderr: (49.0875227s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 cp testdata\cp-test.txt ha-061400:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 cp testdata\cp-test.txt ha-061400:/home/docker/cp-test.txt: (9.8754403s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400 "sudo cat /home/docker/cp-test.txt": (9.8626676s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2518069842\001\cp-test_ha-061400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2518069842\001\cp-test_ha-061400.txt: (9.6670016s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400 "sudo cat /home/docker/cp-test.txt": (9.7300573s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400:/home/docker/cp-test.txt ha-061400-m02:/home/docker/cp-test_ha-061400_ha-061400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400:/home/docker/cp-test.txt ha-061400-m02:/home/docker/cp-test_ha-061400_ha-061400-m02.txt: (16.8440117s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400 "sudo cat /home/docker/cp-test.txt": (9.6725097s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m02 "sudo cat /home/docker/cp-test_ha-061400_ha-061400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m02 "sudo cat /home/docker/cp-test_ha-061400_ha-061400-m02.txt": (9.775806s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400:/home/docker/cp-test.txt ha-061400-m03:/home/docker/cp-test_ha-061400_ha-061400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400:/home/docker/cp-test.txt ha-061400-m03:/home/docker/cp-test_ha-061400_ha-061400-m03.txt: (16.9169788s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400 "sudo cat /home/docker/cp-test.txt": (9.8606755s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m03 "sudo cat /home/docker/cp-test_ha-061400_ha-061400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m03 "sudo cat /home/docker/cp-test_ha-061400_ha-061400-m03.txt": (9.6537464s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400:/home/docker/cp-test.txt ha-061400-m04:/home/docker/cp-test_ha-061400_ha-061400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400:/home/docker/cp-test.txt ha-061400-m04:/home/docker/cp-test_ha-061400_ha-061400-m04.txt: (16.8130067s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400 "sudo cat /home/docker/cp-test.txt": (9.7543314s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m04 "sudo cat /home/docker/cp-test_ha-061400_ha-061400-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m04 "sudo cat /home/docker/cp-test_ha-061400_ha-061400-m04.txt": (9.5692793s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 cp testdata\cp-test.txt ha-061400-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 cp testdata\cp-test.txt ha-061400-m02:/home/docker/cp-test.txt: (10.0446649s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m02 "sudo cat /home/docker/cp-test.txt"
E0409 00:08:10.488365    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m02 "sudo cat /home/docker/cp-test.txt": (9.8418615s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2518069842\001\cp-test_ha-061400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2518069842\001\cp-test_ha-061400-m02.txt: (9.762845s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m02 "sudo cat /home/docker/cp-test.txt": (9.9141886s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m02:/home/docker/cp-test.txt ha-061400:/home/docker/cp-test_ha-061400-m02_ha-061400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m02:/home/docker/cp-test.txt ha-061400:/home/docker/cp-test_ha-061400-m02_ha-061400.txt: (17.2440021s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m02 "sudo cat /home/docker/cp-test.txt": (9.7054551s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400 "sudo cat /home/docker/cp-test_ha-061400-m02_ha-061400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400 "sudo cat /home/docker/cp-test_ha-061400-m02_ha-061400.txt": (9.710269s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m02:/home/docker/cp-test.txt ha-061400-m03:/home/docker/cp-test_ha-061400-m02_ha-061400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m02:/home/docker/cp-test.txt ha-061400-m03:/home/docker/cp-test_ha-061400-m02_ha-061400-m03.txt: (16.8377988s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m02 "sudo cat /home/docker/cp-test.txt"
E0409 00:09:33.577817    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m02 "sudo cat /home/docker/cp-test.txt": (9.7102455s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m03 "sudo cat /home/docker/cp-test_ha-061400-m02_ha-061400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m03 "sudo cat /home/docker/cp-test_ha-061400-m02_ha-061400-m03.txt": (9.6097641s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m02:/home/docker/cp-test.txt ha-061400-m04:/home/docker/cp-test_ha-061400-m02_ha-061400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m02:/home/docker/cp-test.txt ha-061400-m04:/home/docker/cp-test_ha-061400-m02_ha-061400-m04.txt: (16.8100029s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m02 "sudo cat /home/docker/cp-test.txt": (9.6672567s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m04 "sudo cat /home/docker/cp-test_ha-061400-m02_ha-061400-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m04 "sudo cat /home/docker/cp-test_ha-061400-m02_ha-061400-m04.txt": (9.7138554s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 cp testdata\cp-test.txt ha-061400-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 cp testdata\cp-test.txt ha-061400-m03:/home/docker/cp-test.txt: (9.6964393s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m03 "sudo cat /home/docker/cp-test.txt": (9.7365225s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2518069842\001\cp-test_ha-061400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2518069842\001\cp-test_ha-061400-m03.txt: (9.615005s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m03 "sudo cat /home/docker/cp-test.txt": (9.7877852s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m03:/home/docker/cp-test.txt ha-061400:/home/docker/cp-test_ha-061400-m03_ha-061400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m03:/home/docker/cp-test.txt ha-061400:/home/docker/cp-test_ha-061400-m03_ha-061400.txt: (17.0274842s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m03 "sudo cat /home/docker/cp-test.txt": (9.8092225s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400 "sudo cat /home/docker/cp-test_ha-061400-m03_ha-061400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400 "sudo cat /home/docker/cp-test_ha-061400-m03_ha-061400.txt": (9.7890149s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m03:/home/docker/cp-test.txt ha-061400-m02:/home/docker/cp-test_ha-061400-m03_ha-061400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m03:/home/docker/cp-test.txt ha-061400-m02:/home/docker/cp-test_ha-061400-m03_ha-061400-m02.txt: (16.9403471s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m03 "sudo cat /home/docker/cp-test.txt": (9.6907604s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m02 "sudo cat /home/docker/cp-test_ha-061400-m03_ha-061400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m02 "sudo cat /home/docker/cp-test_ha-061400-m03_ha-061400-m02.txt": (9.7280734s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m03:/home/docker/cp-test.txt ha-061400-m04:/home/docker/cp-test_ha-061400-m03_ha-061400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m03:/home/docker/cp-test.txt ha-061400-m04:/home/docker/cp-test_ha-061400-m03_ha-061400-m04.txt: (16.9118466s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m03 "sudo cat /home/docker/cp-test.txt": (9.8373602s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m04 "sudo cat /home/docker/cp-test_ha-061400-m03_ha-061400-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m04 "sudo cat /home/docker/cp-test_ha-061400-m03_ha-061400-m04.txt": (9.6868906s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 cp testdata\cp-test.txt ha-061400-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 cp testdata\cp-test.txt ha-061400-m04:/home/docker/cp-test.txt: (9.6151585s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m04 "sudo cat /home/docker/cp-test.txt": (9.7991049s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2518069842\001\cp-test_ha-061400-m04.txt
E0409 00:13:10.491873    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2518069842\001\cp-test_ha-061400-m04.txt: (9.6991517s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m04 "sudo cat /home/docker/cp-test.txt": (9.6824461s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m04:/home/docker/cp-test.txt ha-061400:/home/docker/cp-test_ha-061400-m04_ha-061400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m04:/home/docker/cp-test.txt ha-061400:/home/docker/cp-test_ha-061400-m04_ha-061400.txt: (17.1084562s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m04 "sudo cat /home/docker/cp-test.txt": (9.8067974s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400 "sudo cat /home/docker/cp-test_ha-061400-m04_ha-061400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400 "sudo cat /home/docker/cp-test_ha-061400-m04_ha-061400.txt": (9.9234512s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m04:/home/docker/cp-test.txt ha-061400-m02:/home/docker/cp-test_ha-061400-m04_ha-061400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m04:/home/docker/cp-test.txt ha-061400-m02:/home/docker/cp-test_ha-061400-m04_ha-061400-m02.txt: (16.9808453s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m04 "sudo cat /home/docker/cp-test.txt": (9.7032569s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m02 "sudo cat /home/docker/cp-test_ha-061400-m04_ha-061400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m02 "sudo cat /home/docker/cp-test_ha-061400-m04_ha-061400-m02.txt": (9.7165346s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m04:/home/docker/cp-test.txt ha-061400-m03:/home/docker/cp-test_ha-061400-m04_ha-061400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 cp ha-061400-m04:/home/docker/cp-test.txt ha-061400-m03:/home/docker/cp-test_ha-061400-m04_ha-061400-m03.txt: (16.7898835s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m04 "sudo cat /home/docker/cp-test.txt": (9.6528353s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m03 "sudo cat /home/docker/cp-test_ha-061400-m04_ha-061400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-061400 ssh -n ha-061400-m03 "sudo cat /home/docker/cp-test_ha-061400-m04_ha-061400-m03.txt": (9.7268705s)
--- PASS: TestMultiControlPlane/serial/CopyFile (642.15s)

TestImageBuild/serial/Setup (194.98s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-054000 --driver=hyperv
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-054000 --driver=hyperv: (3m14.9770986s)
--- PASS: TestImageBuild/serial/Setup (194.98s)

TestImageBuild/serial/NormalBuild (10.73s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-054000
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-054000: (10.7273708s)
--- PASS: TestImageBuild/serial/NormalBuild (10.73s)

TestImageBuild/serial/BuildWithBuildArg (8.94s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-054000
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-054000: (8.9412526s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.94s)

TestImageBuild/serial/BuildWithDockerIgnore (8.17s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-054000
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-054000: (8.1713158s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (8.17s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.29s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-054000
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-054000: (8.2915806s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.29s)

TestJSONOutput/start/Command (202.02s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-599900 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0409 00:26:13.594430    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-599900 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m22.0242778s)
--- PASS: TestJSONOutput/start/Command (202.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (7.98s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-599900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-599900 --output=json --user=testUser: (7.9801395s)
--- PASS: TestJSONOutput/pause/Command (7.98s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (7.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-599900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-599900 --output=json --user=testUser: (7.6475749s)
--- PASS: TestJSONOutput/unpause/Command (7.65s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (34.47s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-599900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-599900 --output=json --user=testUser: (34.4689331s)
--- PASS: TestJSONOutput/stop/Command (34.47s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.96s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-982900 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-982900 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (295.2596ms)

-- stdout --
	{"specversion":"1.0","id":"25f585da-1d78-41dd-bf55-9014bda1d023","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-982900] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"568eeb11-3e72-4953-a585-4221551d718c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"469180dd-3e11-4534-9697-e3494f2e137c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b31faddb-67b5-4b20-854f-88831021fdb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"cfa48c67-ee0d-4d6f-9db7-f6c04c949ec9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20501"}}
	{"specversion":"1.0","id":"9e4f0fb7-327d-47f6-a5b4-1549ea264d8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1f8f7174-eb68-43bf-8a96-e9deafd1ce2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-982900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-982900
--- PASS: TestErrorJSONOutput (0.96s)
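Note: as the stdout block above shows, --output=json emits one CloudEvents-style JSON object per line, with specversion, id, source, type, datacontenttype, and data fields. A minimal Go sketch for consuming that stream (struct fields are taken from the output above; assumes minikube's stdout is piped to this program's stdin):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the stdout block above.
type event struct {
	Type string            `json:"type"` // e.g. io.k8s.sigs.minikube.error
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | <this program>
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}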
TestMainNoArgs (0.24s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.24s)

TestMinikubeProfile (532.41s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-491300 --driver=hyperv
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-491300 --driver=hyperv: (3m16.2417355s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-491300 --driver=hyperv
E0409 00:33:10.508302    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-491300 --driver=hyperv: (3m18.9999841s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-491300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (24.4992103s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-491300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (24.5524862s)
helpers_test.go:175: Cleaning up "second-491300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-491300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-491300: (45.9399026s)
helpers_test.go:175: Cleaning up "first-491300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-491300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-491300: (41.498371s)
--- PASS: TestMinikubeProfile (532.41s)
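TestMinikubeProfile drives two profiles and reads them back with profile list -ojson. This report does not show that JSON payload, so the sketch below deliberately decodes it without assuming a schema: top-level values are kept as raw JSON and only counted. The binary path matches the one used throughout this run; the rest is illustrative.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Binary path as used throughout this run.
	out, err := exec.Command("out/minikube-windows-amd64.exe", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	// Decode without committing to a schema: keep each top-level value raw.
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		panic(err)
	}
	for key, raw := range doc {
		fmt.Printf("%s: %d bytes of JSON\n", key, len(raw))
	}
}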

TestMountStart/serial/StartWithMountFirst (154.29s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-936300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0409 00:38:10.511413    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-936300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m33.2922823s)
--- PASS: TestMountStart/serial/StartWithMountFirst (154.29s)

TestMountStart/serial/VerifyMountFirst (9.43s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-936300 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-936300 ssh -- ls /minikube-host: (9.4336037s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.43s)
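The verify steps in this group all take the same shape: ssh into the VM and list the default 9p mount point, /minikube-host, treating a non-zero exit as a missing mount. A small sketch of that check follows; the command shape mirrors mount_start_test.go:114 and the profile name is the one from this run, while the Go wrapper itself is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command shape as mount_start_test.go:114; profile name from this run.
	cmd := exec.Command("out/minikube-windows-amd64.exe",
		"-p", "mount-start-1-936300", "ssh", "--", "ls", "/minikube-host")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("mount not visible: %v\n%s", err, out)
		return
	}
	fmt.Printf("mount contents:\n%s", out)
}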

TestMountStart/serial/StartWithMountSecond (153.90s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-936300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-936300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m32.8973175s)
--- PASS: TestMountStart/serial/StartWithMountSecond (153.90s)

TestMountStart/serial/VerifyMountSecond (9.60s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-936300 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-936300 ssh -- ls /minikube-host: (9.5998363s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.60s)

TestMountStart/serial/DeleteFirst (30.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-936300 --alsologtostderr -v=5
E0409 00:42:53.610284    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-936300 --alsologtostderr -v=5: (30.6724253s)
--- PASS: TestMountStart/serial/DeleteFirst (30.67s)

TestMountStart/serial/VerifyMountPostDelete (9.41s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-936300 ssh -- ls /minikube-host
E0409 00:43:10.515788    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-936300 ssh -- ls /minikube-host: (9.4074118s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.41s)

TestMountStart/serial/Stop (26.34s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-936300
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-936300: (26.3431568s)
--- PASS: TestMountStart/serial/Stop (26.34s)

TestMountStart/serial/RestartStopped (118.38s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-936300
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-936300: (1m57.3773128s)
--- PASS: TestMountStart/serial/RestartStopped (118.38s)

TestMountStart/serial/VerifyMountPostStop (9.47s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-936300 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-936300 ssh -- ls /minikube-host: (9.4652877s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.47s)

TestMultiNode/serial/FreshStart2Nodes (429.95s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-611500 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0409 00:48:10.519998    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-611500 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m46.2425473s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 status --alsologtostderr
E0409 00:53:10.524073    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 status --alsologtostderr: (23.7080137s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (429.95s)

TestMultiNode/serial/DeployApp2Nodes (10.11s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-611500 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-611500 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-611500 -- rollout status deployment/busybox: (4.1120758s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-611500 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-611500 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-611500 -- exec busybox-58667487b6-c426d -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-611500 -- exec busybox-58667487b6-c426d -- nslookup kubernetes.io: (2.0636218s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-611500 -- exec busybox-58667487b6-q97dd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-611500 -- exec busybox-58667487b6-c426d -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-611500 -- exec busybox-58667487b6-q97dd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-611500 -- exec busybox-58667487b6-c426d -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-611500 -- exec busybox-58667487b6-q97dd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (10.11s)
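The DNS checks above resolve three names (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local) from each busybox pod so that pods on both nodes are covered. Here is a sketch of the same matrix, using kubectl --context directly rather than the minikube kubectl wrapper the test invokes; the pod names are the generated ones from this run, which a real harness would discover by listing pods first, as the test does.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-58667487b6-c426d", "busybox-58667487b6-q97dd"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command("kubectl", "--context", "multinode-611500",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s -> %s: FAILED: %v\n%s", pod, name, err, out)
				continue
			}
			fmt.Printf("%s -> %s: ok\n", pod, name)
		}
	}
}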

TestMultiNode/serial/AddNode (241.47s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-611500 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-611500 -v 3 --alsologtostderr: (3m25.70962s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 status --alsologtostderr
E0409 00:58:10.528323    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 status --alsologtostderr: (35.7581495s)
--- PASS: TestMultiNode/serial/AddNode (241.47s)

TestMultiNode/serial/MultiNodeLabels (0.18s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-611500 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.18s)

TestMultiNode/serial/ProfileList (35.65s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (35.6518329s)
--- PASS: TestMultiNode/serial/ProfileList (35.65s)

TestMultiNode/serial/CopyFile (359.34s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 status --output json --alsologtostderr
E0409 00:59:33.626339    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 status --output json --alsologtostderr: (35.1792724s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 cp testdata\cp-test.txt multinode-611500:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 cp testdata\cp-test.txt multinode-611500:/home/docker/cp-test.txt: (9.3119745s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500 "sudo cat /home/docker/cp-test.txt": (9.3746908s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 cp multinode-611500:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4275839031\001\cp-test_multinode-611500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 cp multinode-611500:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4275839031\001\cp-test_multinode-611500.txt: (9.6244962s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500 "sudo cat /home/docker/cp-test.txt": (9.4225771s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 cp multinode-611500:/home/docker/cp-test.txt multinode-611500-m02:/home/docker/cp-test_multinode-611500_multinode-611500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 cp multinode-611500:/home/docker/cp-test.txt multinode-611500-m02:/home/docker/cp-test_multinode-611500_multinode-611500-m02.txt: (16.3722063s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500 "sudo cat /home/docker/cp-test.txt": (9.419614s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m02 "sudo cat /home/docker/cp-test_multinode-611500_multinode-611500-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m02 "sudo cat /home/docker/cp-test_multinode-611500_multinode-611500-m02.txt": (9.3136475s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 cp multinode-611500:/home/docker/cp-test.txt multinode-611500-m03:/home/docker/cp-test_multinode-611500_multinode-611500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 cp multinode-611500:/home/docker/cp-test.txt multinode-611500-m03:/home/docker/cp-test_multinode-611500_multinode-611500-m03.txt: (16.3835424s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500 "sudo cat /home/docker/cp-test.txt": (9.4159333s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m03 "sudo cat /home/docker/cp-test_multinode-611500_multinode-611500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m03 "sudo cat /home/docker/cp-test_multinode-611500_multinode-611500-m03.txt": (9.4594907s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 cp testdata\cp-test.txt multinode-611500-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 cp testdata\cp-test.txt multinode-611500-m02:/home/docker/cp-test.txt: (9.4686328s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m02 "sudo cat /home/docker/cp-test.txt": (9.3754736s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 cp multinode-611500-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4275839031\001\cp-test_multinode-611500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 cp multinode-611500-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4275839031\001\cp-test_multinode-611500-m02.txt: (9.3725297s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m02 "sudo cat /home/docker/cp-test.txt": (9.4480406s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 cp multinode-611500-m02:/home/docker/cp-test.txt multinode-611500:/home/docker/cp-test_multinode-611500-m02_multinode-611500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 cp multinode-611500-m02:/home/docker/cp-test.txt multinode-611500:/home/docker/cp-test_multinode-611500-m02_multinode-611500.txt: (16.4619176s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m02 "sudo cat /home/docker/cp-test.txt": (9.5468785s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500 "sudo cat /home/docker/cp-test_multinode-611500-m02_multinode-611500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500 "sudo cat /home/docker/cp-test_multinode-611500-m02_multinode-611500.txt": (9.4050514s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 cp multinode-611500-m02:/home/docker/cp-test.txt multinode-611500-m03:/home/docker/cp-test_multinode-611500-m02_multinode-611500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 cp multinode-611500-m02:/home/docker/cp-test.txt multinode-611500-m03:/home/docker/cp-test_multinode-611500-m02_multinode-611500-m03.txt: (16.2410643s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m02 "sudo cat /home/docker/cp-test.txt"
E0409 01:03:10.532389    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m02 "sudo cat /home/docker/cp-test.txt": (9.3530823s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m03 "sudo cat /home/docker/cp-test_multinode-611500-m02_multinode-611500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m03 "sudo cat /home/docker/cp-test_multinode-611500-m02_multinode-611500-m03.txt": (9.3336794s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 cp testdata\cp-test.txt multinode-611500-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 cp testdata\cp-test.txt multinode-611500-m03:/home/docker/cp-test.txt: (9.3924057s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m03 "sudo cat /home/docker/cp-test.txt": (9.4559624s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 cp multinode-611500-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4275839031\001\cp-test_multinode-611500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 cp multinode-611500-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4275839031\001\cp-test_multinode-611500-m03.txt: (9.3757905s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m03 "sudo cat /home/docker/cp-test.txt": (9.3050014s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 cp multinode-611500-m03:/home/docker/cp-test.txt multinode-611500:/home/docker/cp-test_multinode-611500-m03_multinode-611500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 cp multinode-611500-m03:/home/docker/cp-test.txt multinode-611500:/home/docker/cp-test_multinode-611500-m03_multinode-611500.txt: (16.4493567s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m03 "sudo cat /home/docker/cp-test.txt": (9.4286341s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500 "sudo cat /home/docker/cp-test_multinode-611500-m03_multinode-611500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500 "sudo cat /home/docker/cp-test_multinode-611500-m03_multinode-611500.txt": (9.4361984s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 cp multinode-611500-m03:/home/docker/cp-test.txt multinode-611500-m02:/home/docker/cp-test_multinode-611500-m03_multinode-611500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 cp multinode-611500-m03:/home/docker/cp-test.txt multinode-611500-m02:/home/docker/cp-test_multinode-611500-m03_multinode-611500-m02.txt: (16.2958442s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m03 "sudo cat /home/docker/cp-test.txt": (9.4204224s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m02 "sudo cat /home/docker/cp-test_multinode-611500-m03_multinode-611500-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 ssh -n multinode-611500-m02 "sudo cat /home/docker/cp-test_multinode-611500-m03_multinode-611500-m02.txt": (9.4793472s)
--- PASS: TestMultiNode/serial/CopyFile (359.34s)

TestMultiNode/serial/StopNode (76.94s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 node stop m03: (24.6800767s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-611500 status: exit status 7 (26.1693243s)

-- stdout --
	multinode-611500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-611500-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-611500-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-611500 status --alsologtostderr: exit status 7 (26.0886096s)

-- stdout --
	multinode-611500
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-611500-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-611500-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0409 01:06:03.381607   11980 out.go:345] Setting OutFile to fd 1972 ...
	I0409 01:06:03.453604   11980 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 01:06:03.453604   11980 out.go:358] Setting ErrFile to fd 1976...
	I0409 01:06:03.453604   11980 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0409 01:06:03.470167   11980 out.go:352] Setting JSON to false
	I0409 01:06:03.470167   11980 mustload.go:65] Loading cluster: multinode-611500
	I0409 01:06:03.470167   11980 notify.go:220] Checking for updates...
	I0409 01:06:03.471443   11980 config.go:182] Loaded profile config "multinode-611500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0409 01:06:03.471443   11980 status.go:174] checking status of multinode-611500 ...
	I0409 01:06:03.471690   11980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:06:05.655738   11980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:06:05.655855   11980 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:06:05.655855   11980 status.go:371] multinode-611500 host status = "Running" (err=<nil>)
	I0409 01:06:05.655855   11980 host.go:66] Checking if "multinode-611500" exists ...
	I0409 01:06:05.656643   11980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:06:07.786244   11980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:06:07.787058   11980 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:06:07.787224   11980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:06:10.392677   11980 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 01:06:10.392746   11980 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:06:10.392746   11980 host.go:66] Checking if "multinode-611500" exists ...
	I0409 01:06:10.408408   11980 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0409 01:06:10.408408   11980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500 ).state
	I0409 01:06:12.560685   11980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:06:12.561627   11980 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:06:12.561715   11980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]
	I0409 01:06:15.194514   11980 main.go:141] libmachine: [stdout =====>] : 192.168.113.157
	
	I0409 01:06:15.194514   11980 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:06:15.194717   11980 sshutil.go:53] new ssh client: &{IP:192.168.113.157 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500\id_rsa Username:docker}
	I0409 01:06:15.305685   11980 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8971282s)
	I0409 01:06:15.320483   11980 ssh_runner.go:195] Run: systemctl --version
	I0409 01:06:15.344241   11980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0409 01:06:15.367821   11980 kubeconfig.go:125] found "multinode-611500" server: "https://192.168.113.157:8443"
	I0409 01:06:15.367955   11980 api_server.go:166] Checking apiserver status ...
	I0409 01:06:15.381164   11980 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0409 01:06:15.419771   11980 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2104/cgroup
	W0409 01:06:15.437244   11980 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2104/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0409 01:06:15.451262   11980 ssh_runner.go:195] Run: ls
	I0409 01:06:15.458623   11980 api_server.go:253] Checking apiserver healthz at https://192.168.113.157:8443/healthz ...
	I0409 01:06:15.467308   11980 api_server.go:279] https://192.168.113.157:8443/healthz returned 200:
	ok
	I0409 01:06:15.467308   11980 status.go:463] multinode-611500 apiserver status = Running (err=<nil>)
	I0409 01:06:15.467308   11980 status.go:176] multinode-611500 status: &{Name:multinode-611500 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0409 01:06:15.467308   11980 status.go:174] checking status of multinode-611500-m02 ...
	I0409 01:06:15.468328   11980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:06:17.602947   11980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:06:17.602947   11980 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:06:17.602947   11980 status.go:371] multinode-611500-m02 host status = "Running" (err=<nil>)
	I0409 01:06:17.604074   11980 host.go:66] Checking if "multinode-611500-m02" exists ...
	I0409 01:06:17.604936   11980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:06:19.724913   11980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:06:19.724913   11980 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:06:19.725268   11980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:06:22.268784   11980 main.go:141] libmachine: [stdout =====>] : 192.168.113.143
	
	I0409 01:06:22.268784   11980 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:06:22.268784   11980 host.go:66] Checking if "multinode-611500-m02" exists ...
	I0409 01:06:22.283126   11980 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0409 01:06:22.283238   11980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m02 ).state
	I0409 01:06:24.430816   11980 main.go:141] libmachine: [stdout =====>] : Running
	
	I0409 01:06:24.431048   11980 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:06:24.431048   11980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-611500-m02 ).networkadapters[0]).ipaddresses[0]
	I0409 01:06:26.965062   11980 main.go:141] libmachine: [stdout =====>] : 192.168.113.143
	
	I0409 01:06:26.965062   11980 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:06:26.965062   11980 sshutil.go:53] new ssh client: &{IP:192.168.113.143 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-611500-m02\id_rsa Username:docker}
	I0409 01:06:27.071022   11980 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7878342s)
	I0409 01:06:27.086505   11980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0409 01:06:27.117391   11980 status.go:176] multinode-611500-m02 status: &{Name:multinode-611500-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0409 01:06:27.117491   11980 status.go:174] checking status of multinode-611500-m03 ...
	I0409 01:06:27.118553   11980 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-611500-m03 ).state
	I0409 01:06:29.320651   11980 main.go:141] libmachine: [stdout =====>] : Off
	
	I0409 01:06:29.320821   11980 main.go:141] libmachine: [stderr =====>] : 
	I0409 01:06:29.320878   11980 status.go:371] multinode-611500-m03 host status = "Stopped" (err=<nil>)
	I0409 01:06:29.320878   11980 status.go:384] host is not running, skipping remaining checks
	I0409 01:06:29.320878   11980 status.go:176] multinode-611500-m03 status: &{Name:multinode-611500-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (76.94s)
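The stderr above shows how status probes Hyper-V: each check shells out to PowerShell, first for ( Hyper-V\Get-VM <name> ).state and then, only if the VM is running, for its first IP address; a stopped VM reports "Off" and the remaining checks are skipped. The query strings below are copied from the log, while the Go wrapper around them is illustrative, not libmachine's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hyperv runs one PowerShell query the way the log shows libmachine doing it
// (the log invokes the full path C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe).
func hyperv(query string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// Query strings copied from the stderr above.
	state, err := hyperv(`( Hyper-V\Get-VM multinode-611500 ).state`)
	if err != nil {
		panic(err)
	}
	fmt.Println("state:", state) // "Running" or "Off", as seen in the log
	if state == "Running" {
		ip, _ := hyperv(`(( Hyper-V\Get-VM multinode-611500 ).networkadapters[0]).ipaddresses[0]`)
		fmt.Println("ip:", ip)
	}
}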

TestMultiNode/serial/StartAfterStop (194.87s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 node start m03 -v=7 --alsologtostderr
E0409 01:08:10.535272    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 node start m03 -v=7 --alsologtostderr: (2m38.7570005s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-611500 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-611500 status -v=7 --alsologtostderr: (35.9311701s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (194.87s)

TestPreload (574.41s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-135700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0409 01:23:10.546574    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-135700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m28.7401046s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-135700 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-135700 image pull gcr.io/k8s-minikube/busybox: (8.865178s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-135700
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-135700: (39.6195341s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-135700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-135700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (3m28.1959803s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-135700 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-135700 image list: (7.1834215s)
helpers_test.go:175: Cleaning up "test-preload-135700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-135700
E0409 01:28:10.551160    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-135700: (41.8039288s)
--- PASS: TestPreload (574.41s)

TestScheduledStopWindows (329.63s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-703200 --memory=2048 --driver=hyperv
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-703200 --memory=2048 --driver=hyperv: (3m17.3550917s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-703200 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-703200 --schedule 5m: (10.9937959s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-703200 -n scheduled-stop-703200
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-703200 -n scheduled-stop-703200: exit status 1 (10.0111067s)
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-703200 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-703200 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.5714253s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-703200 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-703200 --schedule 5s: (10.637881s)
E0409 01:32:53.657730    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0409 01:33:10.554843    9864 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-582000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-703200
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-703200: exit status 7 (2.4224624s)

-- stdout --
	scheduled-stop-703200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-703200 -n scheduled-stop-703200
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-703200 -n scheduled-stop-703200: exit status 7 (2.3115947s)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-703200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-703200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-703200: (26.3164833s)
--- PASS: TestScheduledStopWindows (329.63s)
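The --format={{.TimeToStop}} and --format={{.Host}} flags used above are Go text/template expressions rendered against a per-node status value; the StopNode stderr earlier in this report shows such a value with Name, Host, TimeToStop, and related fields. A minimal sketch of that rendering, with a stand-in struct rather than minikube's own type:

package main

import (
	"os"
	"text/template"
)

// Stand-in for the per-node status minikube renders; field names match the
// format strings used above and the status dump in the StopNode stderr.
type status struct {
	Name       string
	Host       string
	TimeToStop string
}

func main() {
	st := status{Name: "scheduled-stop-703200", Host: "Stopped", TimeToStop: ""}
	// Equivalent in spirit to: minikube status --format={{.Host}}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}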

TestNoKubernetes/serial/StartNoK8sWithVersion (0.39s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-159600 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-159600 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (386.587ms)

-- stdout --
	* [NoKubernetes-159600] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20501
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.39s)
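This test confirms that --no-kubernetes and --kubernetes-version are mutually exclusive and that the usage error (MK_USAGE) exits with status 14. The sketch below mirrors that observed behavior with the standard flag package; it is not minikube's implementation.

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// Mirrors the rejection above: the two flags are mutually exclusive,
	// and MK_USAGE maps to exit status 14 in the log.
	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags ok")
}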

Test skip (23/140)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.32.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

TestDownloadOnly/v1.32.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
